I feel like this is of a kind with the recent MWI thread :D What’s so great about “really existing, with density epsilon” that’s so much better than “happens with probability epsilon?” This may just be a case of needing to indoctrinate people in decision-fu a bit more.
What’s so great about “really existing, with density epsilon” that’s so much better than “happens with probability epsilon?” This may just be a case of needing to indoctrinate people in decision-fu a bit more.
Not sure what you mean by “decision-fu”, but there’s nothing in current decision theories that says you have to treat these the same. See also this post of mine which may be relevant.
What’s so great about “really existing, with density epsilon” that’s so much better than “happens with probability epsilon?”
Suppose that I had a plan that will wipe out humanity with probability 0.999999999, but will lead to a positive Singularity with probability 0.000000001 minus epsilon. I understand why that would be a terrible plan (unless the probability that we’ll wipe ourselves out anyway is on the same order). Now suppose I had a plan that will wipe out 0.999999999 of humanity’s measure, but lead to a positive Singularity in all but epsilon of the remaining measure (and this plan, if executed at all, will be executed by all but epsilon versions of humanity throughout the spatially infinite universe). On reflection, after understanding better, would we conclude that this is also a terrible plan? I strongly suspect so, but I cannot explain exactly why it would be. All our experiences would presumably be “experienced much less” in some sense, but what does that mean exactly, and why should I care? The decision-fu I know does not make my confusion go away.
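To make the confusion concrete, here is a toy calculation (the utility numbers and the assumption that value adds linearly over measure are mine, purely for illustration): a naive expected-utility computation weights the outcomes identically whether the 0.000000001 is a probability or a measure, so whatever makes the second plan different has to come from outside this arithmetic.

```python
# Toy comparison of the two plans; all numbers are illustrative assumptions.

P_WIPEOUT = 0.999999999        # chance (plan 1) or measure (plan 2) of wiping out humanity
P_GOOD = 1.0 - P_WIPEOUT       # remaining chance / measure (epsilon ignored)

U_WIPEOUT = -1.0               # placeholder utility of extinction (status quo = 0)
U_GOOD = 1e6                   # placeholder utility of a positive Singularity

# Plan 1: weight outcomes by subjective probability.
eu_plan1 = P_WIPEOUT * U_WIPEOUT + P_GOOD * U_GOOD

# Plan 2: weight outcomes by how much measure ends up in each branch,
# assuming value simply adds linearly across measure.
eu_plan2 = P_WIPEOUT * U_WIPEOUT + P_GOOD * U_GOOD

print(eu_plan1, eu_plan2, eu_plan1 == eu_plan2)  # identical: the arithmetic alone can't tell them apart
```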
Suppose that I had a plan that will wipe out humanity with probability 0.999999999, but will lead to a positive Singularity with probability 0.000000001 minus epsilon. I understand why that would be a terrible plan
Hm. Okay. Maybe you could explain it to me then. Why is it a terrible plan, even though all the future people who count will think it was great? None of these futures actually exist to choose between; you’re just planning based on a model of the world. Why not just plan so that the people who matter in your model will have a good time? “When I am, Death is Not, and when Death is, I am Not,” so why should we weight it negatively?
(I’d actually be interested in how you’d answer this).
My favorite answer: It’s just like how you shouldn’t do meth. “Meth” here refers to any drug that rewires your utility function so that you feel really great, right until you drop dead, though you don’t care about that part as much when you’re on meth. If you just want to make your future self happy by their own standards, you should do meth—it feels great once you do it!
Similarly, in the case of either quantum or classical Russian roulette, you may be rich if you win, but “zooming in” on only the person who wins is a way of trying to make that model-person happy by their own standards. This is a generally flawed decision procedure. You should be able to choose based on your own standards, not the standards of the people who live / are on meth.
Suppose that I had a plan that will wipe out humanity with probability 0.999999999, but will lead to a positive Singularity with probability 0.000000001 minus epsilon. I understand why that would be a terrible plan
That means that the utility of a positive singularity (on a scale where the status quo is at 0) is less than −10^9 times the utility of wiping out humanity.
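Spelling out the arithmetic (a reconstruction with the status quo at 0, the epsilon dropped, and U_wipeout, U_singularity as placeholder utilities): calling the plan terrible just means its expected utility is negative, which rearranges to the bound above.

```latex
\mathrm{EU} \;=\; \left(1 - 10^{-9}\right) U_{\text{wipeout}} \;+\; 10^{-9}\, U_{\text{singularity}} \;<\; 0
\quad\Longleftrightarrow\quad
U_{\text{singularity}} \;<\; -\frac{1 - 10^{-9}}{10^{-9}}\, U_{\text{wipeout}} \;\approx\; -10^{9}\, U_{\text{wipeout}}.
```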
Now suppose I had a plan that will wipe out 0.999999999 of humanity’s measure, but lead to a positive Singularity in all but epsilon of the remaining measure … On reflection, after understanding better, would we conclude that this is also a terrible plan? I strongly suspect so, but I cannot explain exactly why it would be.
This is left as an exercise. Hint: the previous paragraph of mine may be relevant.
Doesn’t need to affect your decisions to have a psychological impact.
EDIT (x2): Imagine we found out we’re surrounded by Cthulhu-type monsters on all sides, but luckily we’d be certain they can’t causally interact with us. Business as usual, then? Also, wouldn’t that imply e.g. that if a relative of yours on a spaceship were going over the cosmological horizon, then “relative = dead, stopped existing” and “relative just forever outside my reach” would be considered identical, because both reduce to the same decision tree? I’d disagree with that, too. Different states of mind can lead to the same actions, yet the difference may matter epiphenomenologically.
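To spell out the “same decision tree” part (a toy sketch; the actions and payoffs are invented for illustration): once nothing you do can causally reach the relative, both descriptions license exactly the same choices, even though they are very different claims about the world.

```python
# Toy sketch: two world-descriptions that collapse to the same decision problem.
# Actions and payoffs are invented for illustration.

ACTIONS = ["carry_on", "attempt_contact"]

def payoff(action: str) -> float:
    # Identical whether the relative is dead or merely beyond the cosmological horizon:
    # no action can causally reach them, so attempting contact only wastes resources.
    return 0.0 if action == "carry_on" else -1.0

best_if_dead = max(ACTIONS, key=payoff)
best_if_beyond_horizon = max(ACTIONS, key=payoff)
print(best_if_dead == best_if_beyond_horizon)  # True: the decision tree alone can't tell the cases apart
```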
Doesn’t need to affect your decisions to have a psychological impact.
At that point I don’t see why you’d call it “implications”. What are the implications of a spider getting thrown on your face? You get lots of fear, but it doesn’t have implications for ideal morality and rationality besides “don’t throw spiders on people’s faces”.
I don’t want to hurt your sanity, but we’ve always been surrounded by Cthulhu-type monsters on all sides, and they can’t interact with us. And yeah, business as usual on that one.
And recall, my comparison was not between really existing and being dead—it was between definitely existing with some density, and probably existing with some probability. If your relative definitely exists with a density of 1 in that patch of spacetime over there (unit is “relatives per patch of spacetime you can point to”), then the comparison is to a case where they probably exist with probability 1, which is kinda boring :P
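As a toy illustration of why the two descriptions carry the same decision weight (assuming, purely for the sketch, that value adds linearly over copies, with made-up numbers): a density of d relatives per patch and a probability p of a relative per patch contribute identically whenever d = p, including the boring d = p = 1 case.

```python
# Illustrative only: "exists with density d" vs. "exists with probability p", per patch,
# assuming value adds linearly over copies / over expectation.

N_PATCHES = 10**6
VALUE_PER_RELATIVE = 1.0

def total_value_from_density(d: float) -> float:
    """Relatives really exist, d of them per patch you can point to."""
    return d * N_PATCHES * VALUE_PER_RELATIVE

def expected_value_from_probability(p: float) -> float:
    """Each patch independently contains a relative with probability p."""
    return p * N_PATCHES * VALUE_PER_RELATIVE

for w in (1.0, 1e-9):
    print(w, total_value_from_density(w) == expected_value_from_probability(w))  # True both times
```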
I agree that state of mind is important. But that’s always important, and thus not very informative :)
“implications” has too many implications...