What’s so great about “really existing, with density epsilon” that makes it so much better than “happens with probability epsilon”?
Suppose that I had a plan that will wipe out humanity with probability 0.999999999, but will lead to a positive Singularity with probability 0.000000001 minus epsilon. I understand why that would be a terrible plan (unless the probability that we’ll wipe ourselves out anyway is on the same order). Now suppose I had a plan that will wipe out 0.999999999 of humanity’s measure, but lead to a positive Singularity in all but epsilon of the remaining measure (and this plan, if executed at all, will be executed by all but epsilon versions of humanity throughout the spatially infinite universe). On reflection, after understanding better, would we conclude that this is also a terrible plan? I strongly suspect so, but I cannot explain exactly why it would be. All our experiences would presumably be “experienced much less” in some sense, but what does that mean exactly, and why should I care? The decision-fu I know does not make my confusion go away.
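One way to see the contrast the question turns on (the notation here is mine, not the poster’s): under a straightforward expected-utility reading, the two plans produce the same number, so standard decision theory alone doesn’t distinguish them.

```latex
% Plan 1: an ordinary lottery over outcomes, status quo normalized to U = 0.
\[
\mathbb{E}[U_1] = (1 - 10^{-9})\,U_{\text{wipe}}
                + (10^{-9} - \epsilon)\,U_{\text{Singularity}}
\]
% Plan 2: both outcomes ``really exist''; \mu is the fraction of
% humanity's measure each outcome receives.
\[
U_2 = \mu_{\text{wipe}}\,U_{\text{wipe}}
    + \mu_{\text{Singularity}}\,U_{\text{Singularity}},
\qquad \mu_{\text{wipe}} = 1 - 10^{-9},
\quad \mu_{\text{Singularity}} \approx 10^{-9} - \epsilon
\]
% If utility is linear in measure, the two expressions coincide,
% which is one way of stating the poster's confusion.
```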
Suppose that I had a plan that will wipe out humanity with probability 0.999999999, but will lead to a positive Singularity with probability 0.000000001 minus epsilon. I understand why that would be a terrible plan
Hm. Okay. Maybe you could explain it to me then. Why is it a terrible plan, even though all the future people who count will think it was great? None of these futures actually exist to choose between; you’re just planning based on a model of the world. Why not just plan so that the people who matter in your model will have a good time? “When I am, Death is Not, and when Death is, I am Not,” so why should we weight it negatively?
(I’d actually be interested in how you’d answer this).
My favorite answer: It’s just like how you shouldn’t do meth. “Meth” here refers to any drug that rewires your utility function so that you feel really great, right until you drop dead, though you don’t care about that part as much when you’re on meth. If you just want to make your future self happy by their own standards, you should do meth—it feels great once you do it!
Similarly, in the case of either quantum or classical Russian roulette, you may be rich if you win, but “zooming in” on only the person who wins is a way of trying to make that model person happy by their own standards. This is a generally flawed decision procedure. You should be able to choose based on your own standards, not the standards of the people who live / are on meth.
Suppose that I had a plan that will wipe out humanity with probability 0.999999999, but will lead to a positive Singularity with probability 0.000000001 minus epsilon. I understand why that would be a terrible plan
That means that the utility of a positive Singularity (on a scale where the status quo is at 0) is less than −10^9 times the utility of wiping out humanity.
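A sketch of the arithmetic behind this claim (assuming the expected-utility reading above, with the status quo normalized to 0 as in the parenthetical):

```latex
% Calling the plan ``terrible'' means its expected utility falls
% below the status quo, which is normalized to 0:
\[
(1 - 10^{-9})\,U_{\text{wipe}} + (10^{-9} - \epsilon)\,U_{\text{Singularity}} < 0
\]
% Solving for U_Singularity (note U_wipe < 0, so the right-hand
% side is a large positive bound):
\[
U_{\text{Singularity}}
  < -\frac{1 - 10^{-9}}{10^{-9} - \epsilon}\,U_{\text{wipe}}
  \approx -10^{9}\,U_{\text{wipe}}
\]
```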
Now suppose I had a plan that will wipe out 0.999999999 of humanity’s measure, but lead to a positive Singularity in all but epsilon of the remaining measure … On reflection, after understanding better, would we conclude that this is also a terrible plan? I strongly suspect so, but I cannot explain exactly why it would be.
This is left as an exercise. Hint: the previous paragraph of mine may be relevant.