If I believed immortal-dupe-me, it’d be really awesome. I mean, it would be a high-utility outcome. I’d be envious; I’d be envious of anyone given immortality if I didn’t get it too. But I’d vastly prefer that outcome to no-duplication, even after it was clear I was mortal-dupe-me. If one person was going to get immortality, I’d rather it was a duplicate of me than anyone else except a tiny number of my nearest and dearest.
Pre-duplication me is the same as mortal-dupe-me and immortal-dupe-me, but the two afterwards are not the same person. I’d rather be immortal-dupe-me than mortal-dupe-me (hence the envy), but I’d still rather immortal-dupe-me existed than didn’t.
We could have some real fun together, for as long as I (mortal-dupe-me) have left. For one thing, we could do a lot of really cool practical research into immortality and benevolent alien-gods.
Which leads me to my main point, which is that all this is perhaps beside the point. If someone looking just like me walked through the door and gave a speech like that, I simply wouldn’t believe them. There are loads of possibilities to explain that situation that don’t require such wholesale abandonment of science-as-we-know-it: practical joke, previously unknown twin, hallucination/dream, etc. It’d probably throw me off a bit, but I like to imagine I wouldn’t jump to such a wild conclusion on such a flimsy pretext. Or, put another way, my prior for the existence of interventionist benevolent alien-gods is very, very low indeed.
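To put rough numbers on that last point (purely illustrative, made up only to show the shape of the update): write E for the evidence of my apparent duplicate’s speech. Even if E were ten thousand times more likely under the interventionist-alien-god hypothesis than under all the mundane explanations combined, prior odds of one in a billion on that hypothesis would still leave the posterior odds heavily against it:

$$\frac{P(\text{god}\mid E)}{P(\text{mundane}\mid E)} \;=\; \frac{P(E\mid \text{god})}{P(E\mid \text{mundane})}\cdot\frac{P(\text{god})}{P(\text{mundane})} \;\approx\; 10^{4}\cdot 10^{-9} \;=\; 10^{-5}.$$

So the practical-joke/twin/hallucination bucket would still win by a wide margin.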
EDIT: Just concerning the main point / last paragraph:
Don’t fight the hypothetical; otherwise the answer to any kind of Parfit’s Hitchhiker or Newcomb’s Problem class quagmire would be “I’ve probably been duped, there is no such Omega, and if there is, it ain’t offering boxes.”
Don’t accuse people of fighting the hypothetical when, in addition to questioning whether anything like that would really happen, they also respond to the hypothetical as stated.
He said it was his main point; that’s what I responded to, because I had nothing to add to his other remarks, which “[may be] beside the point”. It’s too easy to get off track with hypotheticals, and for a newcomer I thought the links might be worthwhile.
A problem with training a narrow rationality skill is that, without training balancing skills to a similar degree, you end up overapplying it. The classic example is that if you know how to recognize biased reasoning, but you don’t know to (or fail to, despite knowing) apply the same level of scrutiny to arguments you like as to arguments you don’t like, then every bias you know about makes you stupider.
You may have an overdeveloped sense of “don’t fight the hypothetical” that needs some balance from attention to which questions are important and which answers are applicable in real life. Doug’s response to your hypothetical fully addressed the underlying philosophical question and combined it with an evaluation of how realistic the scenario is; that was a very good answer, regardless of which part of it was labelled the main point. Your criticism that it was fighting the hypothetical was simply wrong.
Bluntly telling a newcomer they’re wrong when they happen to be right, and giving a bunch of links so they have to read three articles, two of them not even relevant, to understand the criticism you are trying to make, is not an effective strategy for community building. (Giving links to interested newcomers can be good, but it should not be confrontational.)
You are reading a whole lot into very little. I’m tapping out, but am available via PM.