I don’t think MWI is analogous to creating extra simultaneous copies. In MWI one maximizes the fraction of future selves experiencing good outcomes. I don’t care about parallel selves, only future selves. As you say, looking back at my self-tree I see a single path, and looking forward I have expectations about future copies, but looking sideways just sounds like daydreaming, and I don’t place a high marginal value on that.
Exactly my view.
A clarification: suppose Roko throws such a qGrenade (TM) at me, and I get $100. I will become angry and may attempt to inflict violence upon Roko. However, that is not because I’m sad about the 50% of parallel, untouchable universes where I’m dead. Instead, it is because Roko’s behavior is strong evidence that in the future he may do dangerous things; righteous anger now (and, perhaps, violence) is simply intended to reduce the measure of my current “futures” where Roko kills me.
On a slightly different note, worrying about my “parallel” copies (or even about their futures) seems to me quite akin to worrying about my past selves. It simply doesn’t mean anything. I really don’t care that my past self a year ago had a toothache — except in the limited sense that it’s slight evidence that I may in the future be predisposed to toothaches. I do care about the probability of my future selves having aching teeth, because I may become them.
Like Sly, I don’t put much value in “versions” of me I can’t interact with. (The “much” is there because, of course, I don’t know with 100% certainty how the universe works, so I can’t be 100% sure what I can interact with.) But my “future selves” are in a kind of interaction with me: what I do influences which of those future selves I’ll become. The value assigned to them is akin to the value someone in free-fall assigns to the rigidity of the surface below them: they aren’t angry because (say) the pavement is hard, in itself; they are angry because it implies a squishy future for themselves. On the other hand, they really don’t care about the surface they’ve fallen from.
With this in mind it seems that you treat a qGrenade in exactly the same way you would treat a pseudo-random grenade. You don’t care whether the probability was quantum or just ‘unknown’. My reasoning may be very slightly different but in this regard we are in agreement.
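To make the comparison concrete, here is a minimal sketch (my own illustration with made-up utilities, not anything stated in the thread): an agent that values only the weighted outcomes of its future selves computes the same expected utility whether the 0.5 attached to the bad outcome is a quantum branch measure or an ordinary credence about a pseudo-random fuse.

```python
# Minimal sketch: expected utility over "future selves", where each outcome
# carries a weight. For a qGrenade the weight is read as quantum branch
# measure; for a pseudo-random grenade it is an ordinary subjective
# probability. The 0.5 figures and utilities below are illustrative
# assumptions, not anything from the discussion.

def expected_utility(outcomes):
    """outcomes: list of (weight, utility) pairs; weights sum to 1."""
    return sum(w * u for w, u in outcomes)

U_ALIVE_WITH_100 = 1.0   # survive and pocket the $100
U_DEAD = -1000.0         # the grenade goes off

q_grenade = [(0.5, U_ALIVE_WITH_100), (0.5, U_DEAD)]              # branch measure
pseudo_random_grenade = [(0.5, U_ALIVE_WITH_100), (0.5, U_DEAD)]  # epistemic credence

# The two cases are numerically indistinguishable to this kind of agent.
assert expected_utility(q_grenade) == expected_utility(pseudo_random_grenade)
print(expected_utility(q_grenade))  # -499.5 in both cases
```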
Yep. Grenades in MY past are always duds, otherwise I wouldn’t be here to talk about them. It doesn’t matter if they were fake, or malfunctioned, or had a pseudorandom or quantum probability to blow up. Past throwers of grenades are only relevant in the sense that they are evidence of future grenade-throwing.
Grenades in my future are those that I’m concerned about. With regard to people intending to throw grenades at me, the only distinction is how sure I am that I’ll live; even something deterministic but hard to compute (for me) I consider a risk, and I’d be angry with the presumptive thrower.
(A fine point: I would be less angry to find out that someone who threw a grenade at me knew it wouldn’t blow, even if I didn’t know it at the time. I’d still be pissed, though.)
I would still kill them, even if I knew they were now completely reformed or impotent. If convenient, I’d beat them to death with a single box just to ram the point home.
Re: “In MWI one maximizes the fraction of future selves experiencing good outcomes.”
Note that the MWI is physics—not morality, though.
You are right, I should have said something like “implementing MWI over some morality.”
Is there something wrong with the parent beyond perhaps being slightly awkward in expression?
Tim seems to be pointing out that MWI itself doesn’t say anything about maximising nor anything about what you should try to maximise. This corrects a misleading claim in the quote.
(Upvoted back to 0 with explanation).
In MWI, one can do nothing about the proportion of future selves experiencing good outcomes that would not have happened anyway.