I actually think I can convince you, because I think if something does go wrong with this, it will not be that argument.
I also will write more when I have time.
I do not think I am being a “physics racist”; I am trying to measure all worlds in the multiverse equally. However, there are infinitely many of them, and in order to do that I have to choose a measure. I am choosing the K-measure because it is the most natural to me, and it honestly feels like the closest I can get to “measuring all worlds equally.” Just saying “uniform distribution” does not mean anything over an infinite collection of worlds.
I believe there is no objective measure on the multiverse, so I am putting a subjective measure on it. If there were a proof that there was an objective measure, and that I should value worlds according to it, I would update my subjective measure to match it. I would not double count.
Basically, I think that I care according to K-complexity because that is the nicest measure I can think of to put on the multiverse.
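To give a rough sense of the kind of measure I mean, here is a toy sketch (my own illustration only: real K-complexity is uncomputable, and the particular 2^(-2n-1) weighting below is just a stand-in for something like 2^(-K(w))). The point is that a length-based weighting treats equal-length descriptions equally and still sums to 1 over infinitely many worlds, which a literal uniform distribution cannot do.

```python
# Toy sketch of a simplicity measure over "worlds" identified with finite
# binary descriptions. Not real K-complexity (which is uncomputable):
# each description of length n gets weight 2^(-2n - 1), so equal-length
# descriptions are weighted equally, longer ones exponentially less,
# and the total mass over ALL descriptions is exactly 1.

from itertools import product

def weight(description: str) -> float:
    return 2.0 ** (-2 * len(description) - 1)

total = 0.0
for n in range(20):  # add up the mass of every description of length < 20
    for bits in product("01", repeat=n):
        total += weight("".join(bits))

print(total)  # ~0.999999; the remaining 2^-20 of mass sits on longer descriptions
```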
Good point. I look forward to hearing more.

True, now that I think of it, there are more things that could go wrong with this as well. I’m glad I found LW, where people are interested in talking about this stuff. (Two years ago I hadn’t heard of LW.)
So, I think the essence of your point is in the following sentence:
“Secondly, I’m willing to bet that if someone were to come along and prove to you that there was an objective measure over the multiverse, and it did favor simplicity in the way that we want it to, you would rejoice and go back to valuing each world equally.”
It seems that if I have my caring measure, and there is also an objective “reality fluid” measure, then they should stack and cause me to care “twice” as much about simplicity.
However, my caring measure is just my subjective assignment of how important each world is. If I learned that there was an objective assignment, that would trump my subjective assignment. It is not like there are two variables, subjective weight and objective weight. There is one variable, weight, and it also gets a subjective or objective flag.
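To make the contrast concrete (this is a toy formalization I am adding here; the symbols $c(w)$, $m(w)$, and $U(w)$ are my own stand-ins for the caring measure, a hypothetical objective measure, and the value of a world): “stacking” would mean valuing by $\sum_w c(w)\, m(w)\, U(w)$, so simplicity would get counted twice. The “one variable with a flag” view says that as long as I believe there is no objective measure, I value by $\sum_w c(w)\, U(w)$; if I learned that $m$ really existed, I would swap $c$ out for $m$ and value by $\sum_w m(w)\, U(w)$.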
It is similar to objective versus subjective morality. If I had a code of morality that I thought was subjective, and learned that there was actually an objective morality that happened to be exactly the same, I would continue following it the same way. The only difference is that I might expect others to follow it more.
I do not know what the correct measure to put on the multiverse is; in fact, I believe there is no correct measure. I therefore have to put my own on. The measure that I put on is the one that feels “uniform” to me. If I learned that there was a correct measure, my intuition about what is “uniform” would change with it.
I think that is part of my point, but my main point was that many theories can receive this treatment.
For example, suppose you believe in a Big World such that every physically possible thing happens somewhere/somewhen in it.
And suppose you believe that there is a teapot in orbit between Mars and Jupiter.
Couldn’t you “prove” your belief by saying “what is probability anyway,” pointing out that there are infinitely many copies of you which live in solar systems with teapots between Mars and Jupiter, and saying that you value those copies more than everyone else? Not because you value any one person more than any other, of course—you value everybody equally—but because of the measure you assign over all copies of you in the Big World.
Do you think there is a principled difference between the scenario I just described, and what you are doing with Measureless Multiverse theory? If you say no, you aren’t sunk—after all, perhaps MMtheory is more plausible for other reasons than the Big World theory I described.
My answer is no, at least not objectively. There is a little caveat here that is related to Eliezer’s theory of metaethics. It is exactly the same as the way I say no, there is no principled reason why killing is bad. From my point of view, killing really is bad, and the fact that I think it is bad is not what causes it to be bad. Similarly, from my point of view simple things really are more important, and if I were to change my mind about that, they would not stop being more important.
Okay. Well, this seems to me to be a bad mark against Measureless Multiverse theory.
If it can only be made to add up to normality by pulling a move that could equally well be used to make pretty much any arbitrary belief system add up to normality… then the fact that it adds up to normality is not something that counts in favor of the theory.
Perhaps you say, fair enough—there are plenty of other things which count in favor of the theory. But I worry. This move makes adding up to normality a cheap, plentiful feature that many many theories share, and that seems dangerous.
Suppose our mathematical abilities advance to the point where we can take measures/languages and calculate the predictions they make, at least approximately. It might turn out that society is split on which simplicity prior to use, and thus split about which predictions to make in some big hypothetical experiment. (I’m imagining a big collider.) Under MMtheory, this would just be an ethical disagreement, one that in fact would not be resolved, or influenced in any way, by performing the experiment. The people who turned out to be “wrong” would simply say “Oh, so I guess I’m in a more complicated world after all. But this doesn’t conflict with my predictions, since I didn’t make any predictions.”
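Here is a toy illustration of the kind of split I have in mind (the two description languages, the two theories, and the complexity numbers are all invented for the example):

```python
# Toy illustration: two camps use different description languages, so they
# assign different complexities (and hence different simplicity weights,
# 2^-complexity, normalized) to the same two rival theories about the
# collider result. Everything here is invented for the example.

complexity_lang1 = {"theory_A": 10, "theory_B": 20}  # language 1's bit counts
complexity_lang2 = {"theory_A": 20, "theory_B": 10}  # language 2's bit counts

def simplicity_weights(complexity):
    raw = {t: 2.0 ** -c for t, c in complexity.items()}
    total = sum(raw.values())
    return {t: w / total for t, w in raw.items()}

print(simplicity_weights(complexity_lang1))  # ~{'theory_A': 0.999, 'theory_B': 0.001}
print(simplicity_weights(complexity_lang2))  # ~{'theory_A': 0.001, 'theory_B': 0.999}

# Read as probabilities, running the collider settles the dispute: one camp
# updates hard against its favored theory. Read as caring measures, neither
# camp made a prediction at all; the "losers" can just say they turned out
# to live in a world they assign less weight to.
```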
What do you think about this issue? Do you think I made a mistake somewhere?
EDIT: Or was I massively unclear? Rereading, I think that might be the case. I’d be happy to rewrite if you like, but since I’m busy now I’ll just hope that it is comprehensible to you.
I’m not sure what to think about your defense here. I think that it probably wouldn’t work if we were talking about valuing people/worlds directly instead of assigning a measure over the space of worlds.