(This comment was originally written in response to shminux below, but it’s more directly addressing nshepperd’s point, so I’m moving it to here)
I understand that you’re arguing that a good decision theory should not rely on MWI. I accept that if you can build one without that reliance, you should; and, in that case, MWI is a red herring here.
But what if you can’t make a good decision theory that works the same with or without MWI? I think that in that case there are anthropic reasons that we should privilege MWI. That is:
The fact that the universe apparently exists, and is apparently consistent with MWI, seems to indicate that an MWI universe is at least possible.
If this universe happens to be “smaller than MWI” for some reason (for instance, we discover a better theory tomorrow, or we’re actually inside a sim that’s faking it somehow), there is some probability that “MWI or larger” does actually exist somewhere else. (You can motivate this with various kinds of handwaving: Tegmark-Level-4 philosophizing, the question of how a smaller-than-MWI simulator could have decided that a pseudo-MWI sim would be interesting, and probably other arguments.)
If intelligence exists in both “smaller than MWI” domains and “MWI or larger” domains, anthropic arguments strongly suggest that we should assume we’re in one of the latter.
(And to summarize, in direct response to nshepperd:)
That’s probably true. But it’s not a good excuse to ignore how things would change if you are in an MWI world, as we seem to be.
If your decision theory doesn’t work independently of whether MWI is true or not, then what do you use to decide if MWI is true?
And if your decision theory does allow for both possibilities (and even if MWI somehow solved Pascal’s Mugging, which I also disagree with) then you would still only win if you assign somewhere around 1 in 3^^^3 probability to MWI being false. On what grounds could you possibly make such a claim?
I’m not saying I have a decision theory at all. I’m saying that whatever your decision theory, MWI being true or not could in principle change the answers it gives.
And if there is some chance that MWI is true, and some chance that it is false, the MWI possibilities have a factor of ~3^^^3 in them. They dominate even if the chance of MWI is small, and far more so if the chance of it being false is small.
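(To put rough numbers on the dominance claim being traded in the last two comments: a minimal expected-utility sketch, assuming a naive total-utilitarian sum over branches; the labels p, N, and u are illustrative and not from the discussion itself.)

```latex
% Sketch only: p = P(MWI true), N = branch multiplier (standing in for ~3^^^3),
% u = per-world utility difference of some choice. Assumes utilities simply
% add across branches.
\[
  \mathbb{E}[\Delta U] \;=\; p\,(N u) \;+\; (1-p)\,u
  \;=\; u\bigl(pN + 1 - p\bigr)
  \;\approx\; p N u \qquad \text{whenever } p \gg 1/N .
\]
% Either way, the term carrying the factor N swamps the comparison unless its
% probability is itself on the order of 1/N, which is the threshold both of
% these comments are pointing at.
```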
Wait, so you’re saying that if MWI is true, then keeping $5 is not only as good as, but outweighs saving 3^^^3 lives by a huge factor?
Does this also apply to regular muggers? You know, the gun-in-the-street, your-money-or-your-life kind? If not, what’s the difference?
No. I’m saying that if there’s (say) a 50% chance that MWI is true, then you can ignore the possibility that it isn’t; unless your decision theory somehow normalizes for the total quantity of people.
If you’ve decided MWI is true, and that measure is not conserved (i.e., as the universe splits, there’s more total reality fluid to go around), then keeping $5 means keeping $5 in something like 3^^^3, or a googolplex, or some such number of universes. If Omega or a Matrix Lord threatens to steal $5 from 3^^^3 people in individual, non-MWI sim-worlds, then that would … well, not actually balance things out, because there’s a huge handwavy error in the exponent here, so one side or the other is going to massively dominate; but you’d have to actually do some heavy calculation to figure out which side it is.
If there’s an ordinary mugger, then you have MWI going on (or not) independently of how you choose to respond, so it cancels out, and you can treat it as just a single instance.
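(The cancellation argument for the ordinary mugger can be written in one line under the same naive branch-counting assumption as the sketch above; again the symbols are illustrative only.)

```latex
% Sketch: if the same branch multiplier N applies whichever way you respond
% to the street mugger, it divides out of the comparison between the options.
\[
  N\,u_{\mathrm{pay}} \;>\; N\,u_{\mathrm{refuse}}
  \quad\Longleftrightarrow\quad
  u_{\mathrm{pay}} \;>\; u_{\mathrm{refuse}}
  \qquad (N > 0),
\]
% so the ordinary mugging can be treated as a single-instance problem.
```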
But if Pascal’s Mugger decides to torture 3^^^3 people because you kept $5, he also does this in “something like 3^^^3 or a googleplex or something” universes. In other words, I don’t see why it doesn’t always cancel out.
I explicitly said that the mugger stealing $5 happens “in individual, non-MWI sim-worlds”. I believe that a given deterministic algorithm, even if it happens to be running in 3^^^3 identical copies, counts as an individual world. You can stir in quantum noise explicitly, which effectively becomes part of the algorithm and thus splits it into many separate sims, each with its own unique noise; but you can’t do that nearly fast enough to keep up with the quantum noise that’s being stirred into real physical humans.
Philosophy questions of what counts as a world aside, who told you that the mugger is running some algorithm (deterministic or otherwise)? How do you know the mugger doesn’t simply have 3^^^3 physical people stashed away somewhere, ready to torture, and prone to all the quantum branching that entails? How do you know you’re not just confused about the implications of quantum noise?
If there’s even a 1-in-a-googolplex chance you’re wrong about these things, then the disutility of the mugger’s threat is still proportional to the 3^^^3-tortured-people, just divided by a mere googolplex (I will be generous and say that if we assume you’re right, the disutility of the mugger’s threat is effectively zero). That still dominates every calculation you could make...
...and even if it didn’t, the mugger could just threaten 3^^^^^^^3 people instead. Any counter-argument that remains valid has to scale with the number of people threatened. Your argument does not so scale.
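(For a sense of scale on the “divided by a mere googolplex” step above: a rough sketch in Knuth up-arrow notation, taking 3^^^3 to mean 3↑↑↑3.)

```latex
% 3^^^3 written out in up-arrow notation:
\[
  3\uparrow\uparrow\uparrow 3
  \;=\; 3\uparrow\uparrow(3\uparrow\uparrow 3)
  \;=\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987 ,
\]
% i.e. a power tower of 3s about 7.6 trillion levels high, whereas a googolplex
% sits between towers of height 4 and 5:
\[
  3\uparrow\uparrow 4 \;<\; 10^{10^{100}} \;<\; 3\uparrow\uparrow 5 .
\]
% Dividing the tall tower by the short one barely shortens it:
\[
  \frac{3\uparrow\uparrow\uparrow 3}{10^{10^{100}}}
  \;>\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}986 ,
\]
% so a one-in-a-googolplex chance of being wrong still leaves an expected
% disutility on the order of 3^^^3.
```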
At this point, we’re mostly both working with different implicitly-modified versions of the original problem, and so if we really wanted to get anywhere we’d have to be a lot more specific.
My original point was that a factor of MWI in the original problem might be non-negligible, and should have been considered. I am acting as the Devil’s Concern Troll, a position which I claim is useful even though it bears a pretty low burden of proof. I do not deny that there are gaping holes in my argument as it relates to this post (though I think I am on significantly firmer ground if you were facing Galaxy Of Computronium Woman rather than Matrix Lord). But I think that if you look at what you yourself are arguing with the same skeptical eye, you’ll see that it is far from bulletproof.
Admit it: when you read my objection, you knew the conclusion (I am wrong) before you’d fully constructed the argument. That kind of goal-directed thinking is irreplaceable for bridging large gaps. But when it leads you to dismiss factors of 3^^^3 or a googolplex as petty matters, that’s mighty dangerous territory.
For instance, if MWI means someone like you is legion, and the anthropic argument means you are more likely to be that someone rather than a non-MWI simulated pseudo-copy thereof, then you do have a pertinent question to ask the Matrix Lord: “You’re asking me to give you $5, but what if some copies of me do and others don’t?” If it answers, for instance, “I’ve turned off MWI for the duration of this challenge”, then the anthropic improbability of the situation just skyrocketed; not by anything like enough to outweigh the 3^^^^3 threat, but easily by enough to outweigh the improbability that you’re just hallucinating this (or that you’re just a figment of the imagination of the Matrix Lord as it idly considers whether to pose this problem for real, to the real you).
Again: if you look for the weakest, or worse, the most poorly-expressed part of what I’m saying, you can easily knock it down. But it’s better if you steel-man it; I don’t see how the correct response could possibly be “Factor of 3^^^3? Hadn’t considered that exactly, but it’s probably irrelevant, let’s see how.”
On an even more general level, my larger point is that I find that multiplicity (both MWI and Tegmark level 4) is a fruitful inspiration for morals and decision theory; more fruitful, in my experience, than simulations, Omega, Matrix Lords, and GOCW. Note that MWI and TL4, like Omega and GOCW, don’t have to be true or falsifiable in order to be useful as inspiration. My experience includes thinking about these matters more than most, but certainly less than people like Eliezer. Take that as you will.
I think we’re talking past each other, and future discussion will not be productive, so I’m tapping out now.
(Moved my reply, too)
This contradicts the premise that MWI is untestable experimentally and is only a Bayesian necessity, which is the point of view Eliezer seems to hold. Indeed, if an MWI-based DT suggests a different course of action than a single-world one, then you can test the accuracy of each and find out whether MWI is a good model of this world. If, furthermore, one can show that no single-world DT is as accurate as a many-world one, I will be convinced.
It is also consistent with Christianity and invisible pink unicorns, why do you prefer to be MWI-mugged rather than Christ-mugged or unicorn-mugged?
No it doesn’t. DT is about what you should do, especially when we’re invoking Omega and Matrix Lords and the like. Which DT is better is not empirically testable.
Yes, except that MWI is the best theory currently available to explain mountains of experimental evidence, while Christianity is empirically disproven (“Look, wine, not blood!”) and invisible pink unicorns (and invisible, pink versions of Christianity) are incoherent and unfalsifiable.
(Later edit: “best theory currently available to explain mountains of experimental evidence” describes QM in general, not MWI. I have a hard time imagining a version of QM that doesn’t include some form of MWI, though, as shminux points out downthread, the details are far from settled. Certainly I don’t think that there’s a lot to be gained by comparing MWI to invisible pink unicorns. Both have a probability that is neither 0 nor 1, but the similarity pretty much ends there.)
You ought to notice your confusion by now.
What is your level of understanding of QM? Consider reading this post.
Re DT: OK, I notice I am confused.
Re MWI: My understanding of QM is quite good for someone who has never done the actual math. I realize that there are others whose understanding is vastly better. However, this debate is not about the equations of QM per se, but about the measure theory that tells you how “real” the different parts of them are. That is also an area where I’m no more than an advanced amateur, but it is also an area in which nobody in this discussion has the hallmarks of an expert. Which is why we’re using terms like “reality fluid”.
And my violin skills are quite good for someone who has never done the actual playing.
Different parts of what? Of equations? They are all equally real: together they form mathematical models necessary to describe observed data.
Eliezer is probably the only one who uses that, and the full term is “magical reality fluid” or something similar, named that way specifically to remind him that he is confused about it.
I have actually done the math for simple toy cases like Bell’s inequality. But yeah, you’re right, I’m no expert.
(Out of curiosity, are you?)
ψ
I have a related degree, if that’s what you are asking.
I have yet to see anyone write down anything more than hand-waving about this in MWI. Zurek’s ideas of einselection and envariance go some way toward showing why only the eigenstates survive when decoherence happens, and there is some experimental support for this, though the issue is far from settled.
Precisely; the issue is far from settled. That clearly doesn’t mean “any handwavy speculation is as good as any other” but it also doesn’t mean “speculation can be dismissed out of hand because we already understand this and you’re just wrong”.