Sorry, I thought she flipped a coin to decide which hotel to build rather than making both. This changes nothing in my analysis.
I don’t think they break quite as badly as the third horn asserts. If I fork myself into two people, I’m definitely going to be each of them, but I’m not going to be Britney Spears.
Can you back this up? Normal probabilities don’t work, but UDT does (for some reason I had written TDT in a previous post; that was an error and has been corrected). However, UDT makes no mention of subjectively anticipated probabilities. In fact, the idea of a probability that one is in a specific universe breaks down entirely in UDT. It must, since otherwise UDT agents would not pay counterfactual muggers. If you don’t have the concept of a probability that one is in a specific universe, let alone that one is a specific person in that specific universe, what could possibly remain on which to base a concept of personal identity?
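To make the point concrete, here is a minimal sketch of the counterfactual mugging calculation (the $100 / $10,000 payoffs are the standard ones from the thought experiment, assumed here purely for illustration). UDT scores a policy across both coin outcomes at once, rather than conditioning on the branch the agent observes itself in:

```python
def policy_value(pays_when_asked: bool) -> float:
    """Expected utility of a policy over the prior (both branches)."""
    p_heads = 0.5
    # Heads: Omega rewards the agent iff its policy would pay on tails.
    heads_payoff = 10_000 if pays_when_asked else 0
    # Tails: Omega asks for $100; paying costs the agent that much.
    tails_payoff = -100 if pays_when_asked else 0
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

print(policy_value(True))   # 4950.0 -> the paying policy wins, so UDT pays
print(policy_value(False))  # 0.0
```

Note that the agent actually asked for $100 never sees the heads branch, which is exactly why “the probability that I am in this universe” does no work here.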
In that case, I’m not sure where we disagree. Your explanation of UDT seems to accurately describe my position on the subject.
Edit: wait, no, that doesn’t sound right. Hm.
Edit 2: no, I read it right the first time. There might be something resembling being in specific universes, just as there might be something resembling probability, but most of the basic assumptions are out.
I’m not quite sure that I understand your post, but if I do, it seems to contradict what you said earlier. If the concepts of personal identity and anticipated subjective experience are mere approximations to the truth, how do you determine what is and isn’t a copy? Your earlier statement, “The important thing is that I fork myself knowing that I might become the unhappy one (or, more properly, that I will definitely become both), so that I only harm myself,” seems to be entirely grounded in these ideas.
Continuity of personal identity is an extraordinarily useful concept, especially from an ethical perspective. If Sam forks Monday night in his sleep, then on Tuesday we have two people:
Sam-X, with personal timeline as follows: Sam_sunday, Sam_monday, Sam_tuesday_x
Sam-Y, with personal timeline as follows: Sam_sunday, Sam_monday, Sam_tuesday_y
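To make the fork concrete, here is one way those timelines could be encoded (a purely illustrative sketch; the names come from the example above):

```python
# Both Tuesday copies share the same pre-fork history.
shared_history = ["Sam_sunday", "Sam_monday"]

sam_x = shared_history + ["Sam_tuesday_x"]
sam_y = shared_history + ["Sam_tuesday_y"]

# Neither copy has a privileged claim on the shared prefix.
assert sam_x[:2] == sam_y[:2] == shared_history
```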
I consider it self-evident that Sam_sunday should be allowed to arrange for Sam_monday to be tortured without the ability to make it stop, and by the same token Sam_monday should be allowed to do the same thing to Sam_tuesday_x.
I reject the premise. Why should it be self-evident that Sam_sunday should be allowed to arrange for Sam_monday to be tortured? Doesn’t this seem like something people only came up with because of the illusion of subjective anticipation?
EDIT: I just read what you wrote in a different comment on this post:
I don’t actually care about the avoidance of torture as a terminal moral value.
Your statements make sense in light of this. My morality is much closer to classical utilitarianism (is that the term?) and may actually be classical utilitarianism upon reflection. I assumed that you did care about the avoidance of torture as a terminal value, since most LessWrongers do. Torture is often used as a stock example of something that causes disutility, so if you are presenting an argument, you will often need to mention this aspect of your value system in order to bridge the inferential distance.
I think that difference accounts for my remaining confusion.