Anthropic measure ("magic reality fluid") measures what reality actually is—roughly, how an outside observer would see things. Anthropic measure is more properly possessed by states of the universe than by individual instances of you.
It doesn’t look like a helpful notion and seems very tautological. How do I observe this anthropic measure—how can I make any guesses about what the outside observer would see?
You can make yourself expect (probability) to see a beach soon, but that doesn't change the fact that you still actually have to sit through the cold (anthropic measure).
Continuing—how do I know I'd still have to sit through the cold? Maybe I am in my simulated past—in the hypothetical scenario, that's a very down-to-earth assumption.
Sorry, but the above doesn't clarify anything for me. I might accept that the concept of probability is out of scope here, i.e. that Bayesianism doesn't work for guessing whether one is or isn't in a certain simulation, but I don't know if that's what you meant.
How do I observe this anthropic measure—how can I make any guesses about what the outside observer would see?
The same way you’d make such guesses normally—observe the world, build an implicit model, make interpretations etc. “How” is not really an additional problem, so perhaps you’d like examples and motivation.
Suppose I flip a quantum coin: if it lands heads I give you cake, and if it lands tails I don't—you expect to get cake with 50% probability. Similarly, if you start with 1 unit of anthropic measure, it gets split between the cake and no-cake outcomes, 0.5 and 0.5. Everything is ordinary.
However, consider the case where you get no cake, but I run a perfect simulation of you in which you do get cake in the near future. At some point after the simulation has started, your proper probability assignment is 50% that you'll get cake and 50% that you won't, just as in the quantum coin flip. But now, if you start with 1 unit of anthropic measure, your measure never changes—instead, a simulation is started in the same universe, and it also gets 1 unit of measure!
If all we cared about in decision-making was probabilities, we’d treat these two cases the same (e.g. you’d pay the same amount to make either happen). But if we also care about anthropic measure, then we will probably prefer one over the other.
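To make the bookkeeping concrete, here is a minimal sketch of my own (not a settled formalism from the discussion), under the stated assumptions: you start with 1 unit of measure, a quantum coin splits it, and a perfect simulation adds a fresh unit rather than splitting the original.

```python
# Minimal sketch (illustrative assumptions, not an endorsed formalism):
# compare probability vs. anthropic-measure bookkeeping for the two cases.

def quantum_coin_case():
    # The coin splits both your credence and your 1 unit of measure 50/50.
    return {
        "cake":    {"probability": 0.5, "measure": 0.5},
        "no cake": {"probability": 0.5, "measure": 0.5},
    }

def simulation_case():
    # No coin: the original never gets cake, but a perfect simulation that
    # will get cake is started in the same universe. Your credence is still
    # 50/50 (you can't tell which copy you are), yet nothing is split:
    # the simulation brings its own full unit of measure.
    return {
        "cake (simulated copy)": {"probability": 0.5, "measure": 1.0},
        "no cake (original)":    {"probability": 0.5, "measure": 1.0},
    }

for name, case in (("quantum coin", quantum_coin_case()),
                   ("simulation", simulation_case())):
    total = sum(v["measure"] for v in case.values())
    print(f"{name}: probabilities sum to 1, total measure = {total}")
# quantum coin: total measure = 1.0; simulation: total measure = 2.0
```

The probability column is identical in both cases; only the measure column distinguishes them, which is exactly why a purely probability-based decision rule would treat them the same.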
It’s also important to keep track of anthropic measure as an intermediate step to getting probabilities in nontrivial cases like the Sleeping Beauty problem. If you only track probabilities, you end up normalizing too soon and too often.
Sorry, but the above doesn't clarify anything for me. I might accept that the concept of probability is out of scope here, i.e. that Bayesianism doesn't work for guessing whether one is or isn't in a certain simulation, but I don't know if that's what you meant.
I mean something a bit more complicated—that probability is working fine and giving sensible answers, but that when probability measure and anthropic measure diverge, probabilities no longer fit into decision-making in a simple way, even though they still genuinely reflect your state of knowledge.
There are many kinks in what a better system would actually look like; hopefully I'll eventually work some of them out and write up a post.
It doesn’t look like a helpful notion and seems very tautological. How do I observe this anthropic measure—how can I make any guesses about what the outside observer would see?
Continuing—how do I know I’d still have to sit through the cold? Maybe I am in my simulated past—in hypothetical scenario it’s a very down-to-earth assumption.
Sorry, but above doesn’t clarify anything for me. I may accept that the concept of probability is out of the scope here, that bayesianism doesn’t work for guessing whether one is or isn’t in a certain simulation, but I don’t know if that’s what you meant.
The same way you’d make such guesses normally—observe the world, build an implicit model, make interpretations etc. “How” is not really an additional problem, so perhaps you’d like examples and motivation.
Suppose that I flip a quantum coin, and if it lands heads I give you cake and tails I don’t—you expect to get cake with 50% probability. Similarly, if you start with 1 unit of anthropic measure, it gets split between cake and no-cake 0.5 to 0.5. Everything is ordinary.
However, consider the case where you get no cake, but I run a perfect simulation of you in which you get cake in the near future. At some point after the simulation has started, your proper probability assignment is 50% that you’ll get cake and 50% that you won’t, just like in the quantum coin flip. But now, if you start with 1 unit of anthropic measure, your measure never changes—instead a simulation is started in the same universe that also gets 1 unit of measure!
If all we cared about in decision-making was probabilities, we’d treat these two cases the same (e.g. you’d pay the same amount to make either happen). But if we also care about anthropic measure, then we will probably prefer one over the other.
It’s also important to keep track of anthropic measure as an intermediate step to getting probabilities in nontrivial cases like the Sleeping Beauty problem. If you only track probabilities, you end up normalizing too soon and too often.
I mean something a bit more complicated—that probability is working fine and giving sensible answers, but that when probability measure and anthropic measure diverge, probabilities no longer fit into decision-making into a simple way, even though they still really do reflect your state of knowledge.
There are many kinks in what a better system would actually be, and hopefully I’ll eventually work out some kinks and write up a post.