No. I’m saying that if there’s (say) a 50% chance that MWI is true, then you can ignore the possibility that it isn’t, unless your decision theory somehow normalizes for the total number of people.
If you’ve decided MWI is true, and that measure is not conserved (i.e., as the universe splits, there’s more total reality fluid to go around), then keeping $5 means keeping $5 in something like 3^^^3 or a googolplex or something universes. If Omega or the Matrix Lord threatens to steal $5 from 3^^^3 people in individual, non-MWI sim-worlds, then that would … well, of course, not actually balance things out, because there’s a huge handwavy error in the exponent here, so one or the other is going to massively dominate, but you’d have to actually do some heavy calculation to try to figure out which side it is.
If there’s an ordinary mugger, then you have MWI going on (or not) independently of how you choose to respond, so it cancels out, and you can treat it as just a single instance.
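To spell out the cancellation arithmetic (a minimal sketch, assuming the branch count $N$ is the same whichever way you respond, which is exactly the premise in the ordinary-mugger case):

$$
N \cdot U(\text{keep } \$5) \;\gtrless\; N \cdot U(\text{hand it over})
\quad\Longleftrightarrow\quad
U(\text{keep } \$5) \;\gtrless\; U(\text{hand it over}).
$$

The common factor $N$ drops out, so the ordinary mugger really can be treated as a single instance; the asymmetry only shows up when one side of the ledger lives across MWI branches and the other side doesn’t.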
If you’ve decided MWI is true, and that measure is not conserved (i.e., as the universe splits, there’s more total reality fluid to go around), then keeping $5 means keeping $5 in something like 3^^^3 or a googolplex or something universes.
But if Pascal’s Mugger decides to torture 3^^^3 people because you kept $5, he also does this in “something like 3^^^3 or a googolplex or something” universes. In other words, I don’t see why it doesn’t always cancel out.
I explicitly said that the mugger stealing $5 happens “in individual, non-MWI sim-worlds”. I believe that a given deterministic algorithm, even if it happens to be running in 3^^^3 identical copies, counts as an individual world. You can stir in quantum noise explicitly, which effectively becomes part of the algorithm and thus splits it into many separate sims, each with its own unique noise; but you can’t do that nearly fast enough to keep up with the quantum noise that’s being stirred into real physical humans.
Philosophical questions of what counts as a world aside, who told you that the mugger is running some algorithm (deterministic or otherwise)? How do you know the mugger doesn’t simply have 3^^^3 physical people stashed away somewhere, ready to torture, and prone to all the quantum branching that entails? How do you know you’re not just confused about the implications of quantum noise?
If there’s even a 1-in-a-googolplex chance you’re wrong about these things, then the disutility of the mugger’s threat is still proportional to the 3^^^3 tortured people, just divided by a mere googolplex (I will be generous and say that if we assume you’re right, the disutility of the mugger’s threat is effectively zero). That still dominates every calculation you could make...
...and even if it didn’t, the mugger could just threaten 3^^^^^^^3 people instead. Any counter-argument that remains valid has to scale with the number of people threatened. Your argument does not so scale.
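To put a rough number on “still dominates” (a back-of-the-envelope sketch, taking the 1-in-a-googolplex figure literally and writing 3^^^3 in up-arrow notation):

$$
10^{-10^{100}} \times \left(3\uparrow\uparrow\uparrow 3\right) \;=\; \frac{3\uparrow\uparrow\uparrow 3}{10^{10^{100}}}.
$$

Since $\log_{10}\!\left(3\uparrow\uparrow\uparrow 3\right)$ is itself a power tower of threes roughly $7.6 \times 10^{12}$ levels tall, dividing by a googolplex merely subtracts $10^{100}$ from that exponent, which changes essentially nothing: the expected disutility remains unimaginably larger than anything a $5 bill could ever buy.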
At this point, we’re both mostly working with different, implicitly modified versions of the original problem, so if we really wanted to get anywhere we’d have to be a lot more specific.
My original point was that an MWI factor in the original problem might be non-negligible, and should have been considered. I am acting as the Devil’s Concern Troll, a position which I claim is useful even though it bears a pretty low burden of proof. I do not deny that there are gaping holes in my argument as it relates to this post (though I think I am on significantly firmer ground if you were facing Galaxy Of Computronium Woman rather than Matrix Lord). But I think that if you look at what you yourself are arguing with the same skeptical eye, you’ll see that it is far from bulletproof.
Admit it: when you read my objection, you knew the conclusion (I am wrong) before you’d fully constructed the argument. That kind of goal-directed thinking is irreplaceable for bridging large gaps. But when it leads you to dismiss factors of 3^^^3 or a googolplex as petty matters, that’s mighty dangerous territory.
For instance, if MWI means someone like you is legion, and the anthropic argument means you are more likely to be that someone rather than a non-MWI simulated pseudo-copy thereof, then you do have a pertinent question to ask the Matrix Lord: “You’re asking me to give you $5, but what if some copies of me do and others don’t?” If it answers, for instance, “I’ve turned off MWI for the duration of this challenge”, then the anthropic improbability of the situation just skyrocketed; not by anything like enough to outweigh the 3^^^3 threat, but easily by enough to outweigh the improbability that you’re just hallucinating this (or that you’re just a figment of the imagination of the Matrix Lord as it idly considers whether to pose this problem for real, to the real you).
Again: if you look for the weakest, or worse, the most poorly expressed part of what I’m saying, you can easily knock it down. But it’s better if you steel-man it; I don’t see how the correct response could possibly be “Factor of 3^^^3? Hadn’t considered that exactly, but it’s probably irrelevant, let’s see how.”
On an even more general level, my larger point is that I find multiplicity (both MWI and Tegmark level 4) to be a fruitful inspiration for morals and decision theory; more fruitful, in my experience, than simulations, Omega, Matrix Lords, and GOCW. Note that MWI and TL4, like Omega and GOCW, don’t have to be true or falsifiable in order to be useful as inspiration. My experience includes thinking about these matters more than most, but certainly less than people like Eliezer. Take that as you will.
I think we’re talking past each other, and future discussion will not be productive, so I’m tapping out now.