When there’s nothing real at stake, I might decide to press the button or take the few minutes of pain in order to get the warm fuzzies. But if something that actually mattered were on the line, this stuff would go right out the window.
I reject all five horns of the anthropic trilemma. My position is that the laws of probability mostly break down whenever weird anthropic stuff happens, and that the naive solution to the forgetful driver problem is correct. In the hotel with the presumptuous philosopher, I take the bet for an expected $10.
The third horn basically states that the laws of probability break down when weird anthropic things happen. How can you retain a thread of subjective experience if the laws of probability—the very laws that describe anticipation of subjective experience—break down?
Decision-theoretically, I believe in UDT. I would take the bet because I attach no negative utility to the presumptuous philosopher smiling, but if I had anything to lose, even a penny, I would not take it: each of my copies in the big hotel, each of which has a 50% chance of existing, would stand to lose that penny, and those losses add up to a much greater total loss. It would make no sense to ask what I would do in this situation if I were selfish and did not care about the other copies, because the idea of selfishness, at least as it would apply here, depends on anticipated subjective experience.
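To spell out the arithmetic, here is a minimal sketch with illustrative figures of my own (the trillion-room big hotel, the $20 payoff, and the one-cent stake are all assumptions, not part of the original problem; the $20 payoff at 50% is what makes the expected gain the $10 I mentioned):

```python
# Hypothetical figures, not from the original problem: a trillion copies in
# the big hotel, a $20 payoff to the small-hotel copy, a one-cent stake.
BIG_HOTEL_COPIES = 10**12
SMALL_HOTEL_COPIES = 1
P_EXIST = 0.5   # the 50% chance each copy has of existing, on the coin-flip reading

PAYOFF = 20.00  # won by the small-hotel copy if the policy is "take the bet"
STAKE = 0.01    # lost by each big-hotel copy under the same policy

# A UDT-style evaluation scores the policy by summing over every copy it
# affects, rather than asking "which copy am I?".
expected_gain = P_EXIST * SMALL_HOTEL_COPIES * PAYOFF
expected_loss = P_EXIST * BIG_HOTEL_COPIES * STAKE

print(f"expected gain: ${expected_gain:,.2f}")  # $10.00
print(f"expected loss: ${expected_loss:,.2f}")  # $5,000,000,000.00
```

With nothing at stake the loss term vanishes and the policy nets an expected $10; with even a penny at stake, the summed losses across the copies dominate, which is why I refuse.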
I don’t think they break quite as badly as the third horn asserts. If I fork myself into two people, I’m definitely going to be each of them, but I’m not going to be Britney Spears.
Most of your analysis of the hotel problem sounds like what I believe, but I don’t see where you get 50%. Do you think you’re equally likely to be in each hotel? And besides, if you’re in the small hotel, the copies in the big hotel still exist, right?
Sorry, I thought she flipped a coin to decide which hotel to build rather than building both. This changes nothing in my analysis.
I don’t think they break quite as badly as the third horn asserts. If I fork myself into two people, I’m definitely going to be each of them, but I’m not going to be Britney Spears.
Can you back this up? Normal probabilities don’t work, but UDT does (for some reason I had written TDT in a previous post; that was an error and has been corrected). However, UDT makes no mention of subjective anticipated probabilities. In fact, the idea of a probability that one is in a specific universe breaks down entirely in UDT. It must, otherwise UDT agents would not pay counterfactual muggers. If you don’t have the concept of a probability that one is in a specific universe, let alone that one is a specific person in that specific universe, what could possibly remain on which to base a concept of personal identity?
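To spell out the counterfactual mugging point: Omega flips a fair coin; on tails it asks you for money, and on heads it pays you a large sum only if you would have paid on tails. Using the customary illustrative figures of $100 and $10,000 (the exact numbers vary by telling), UDT scores each policy across both branches rather than updating on the branch it finds itself in:

```python
# The customary counterfactual-mugging figures (illustrative; tellings vary).
P_HEADS = 0.5
PAYOUT = 10_000  # heads: Omega pays, but only agents who would pay on tails
COST = 100       # tails: what Omega asks the agent to hand over

# UDT evaluates each policy across both branches of the coin flip,
# never conditioning on which branch it happens to be in.
value_pay = P_HEADS * PAYOUT - (1 - P_HEADS) * COST
value_refuse = 0.0

print(value_pay)     # 4950.0 -> the policy "pay" wins
print(value_refuse)  # 0.0
```

An agent that first updates on seeing tails computes a straight loss of $100 and refuses; paying requires giving up the notion of a probability of being in the tails branch, which is exactly the concept I am saying UDT does without.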
In that case, I’m not sure where we disagree. Your explanation of UDT seems to accurately describe my position on the subject.
Edit: wait, no, that doesn’t sound right. Hm.
Edit 2: no, I read it right the first time. There might be something resembling being in specific universes, just as there might be something resembling probability, but most of the basic assumptions are out.
I’m not quite sure that I understand your post, but, if I do, it seems to contradict what you said earlier. If the concepts of personal identity and anticipated subjective experience are mere approximations to the truth, how do you determine what is and isn’t a copy? Your earlier statement, “The important thing is that I fork myself knowing that I might become the unhappy one (or, more properly, that I will definitely become both), so that I only harm myself,” seems to be entirely grounded in these ideas.
Continuity of personal identity is an extraordinarily useful concept, especially from an ethical perspective. If Sam forks Monday night in his sleep, then on Tuesday we have two people (see the sketch after the list for the shared history):
Sam-X, with personal timeline as follows: Sam_sunday, Sam_monday, Sam_tuesday_x
Sam-Y, with personal timeline as follows: Sam_sunday, Sam_monday, Sam_tuesday_y
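Here is the promised sketch: a minimal model (my own illustration; only the stage names come from the list above) in which each person-stage records its predecessor, so both Tuesday stages trace back through the same Sunday and Monday history:

```python
# Minimal model of a fork: each person-stage records its predecessor.
class Stage:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def timeline(self):
        """Walk back through predecessors and return the full personal timeline."""
        stages = []
        node = self
        while node is not None:
            stages.append(node.name)
            node = node.parent
        return list(reversed(stages))

sam_sunday = Stage("Sam_sunday")
sam_monday = Stage("Sam_monday", sam_sunday)
# The Monday-night fork: two Tuesday stages share one predecessor.
sam_tuesday_x = Stage("Sam_tuesday_x", sam_monday)
sam_tuesday_y = Stage("Sam_tuesday_y", sam_monday)

print(sam_tuesday_x.timeline())  # ['Sam_sunday', 'Sam_monday', 'Sam_tuesday_x']
print(sam_tuesday_y.timeline())  # ['Sam_sunday', 'Sam_monday', 'Sam_tuesday_y']
```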
I consider it self-evident that Sam_sunday should be allowed to arrange for Sam_monday to be tortured without the ability to make it stop, and by the same token Sam_monday should be allowed to do the same thing to Sam_tuesday_x.
I consider it self-evident that Sam_sunday should be allowed to arrange for Sam_monday to be tortured without the ability to make it stop, and by the same token Sam_monday should be allowed to do the same thing to Sam_tuesday_x.
I reject the premise. Why should it be self-evident that Sam_sunday should be allowed to arrange for Sam_monday to be tortured? Doesn’t this seem like something people only came up with because of the illusion of subjective anticipation?
EDIT: I just read what you wrote in a different comment on this post:
I don’t actually care about the avoidance of torture as a terminal moral value.
Your statements make sense in light of this. My morality is much closer to classical utilitarianism (is that the term?) and may actually be classical utilitarianism upon reflection. I assumed that you did care about the avoidance of torture as a terminal value, since most LessWrongers do. Torture is often used as a stock example of something that causes disutility, so if you are presenting an argument, you will often need to mention this aspect of your value system in order to bridge inferential distance.
I think that difference accounts for my remaining confusion.