Two fluid model of anthropics. The two different fluids are “probability” and “anthropic measure.” Probabilities come from your information, and thus you can manipulate your probability by manipulating your information (e.g. by knowing you’ll make more copies of yourself on the beach). Anthropic measure (magic reality fluid) measures what the reality is—it’s like how an outside observer would see things. Anthropic measure is more properly possessed by states of the universe than by individual instances of you.
Thus a paradox. Even though you can make yourself expect (probability) to see a beach soon, it doesn’t change the fact that you actually still have to sit through the cold (anthropic measure). Promising to copy yourself later doesn’t actually change how much magic reality fluid the you sitting there in the cold has, so it doesn’t “really” do anything.
I like your general approach, but I find the two fluid model a confusing way of describing your idea. I think the conflict dissolves if you actually try to use your anticipation to do something useful.
Example. Suppose you can either push button A (before 11AM) so that if you’re still in the room you get a small happiness reward, or push button B so that if you’re transported to paradise you get a happiness reward. If you value happiness in all your copies equally, you should push button B, which means that you “anticipate” being transported to paradise.
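To make that decision concrete, here is a minimal sketch. The reward sizes and the number of beach copies are made-up assumptions for illustration, not part of the original setup:

```python
# Illustrative sketch only: the rewards and copy counts are assumptions,
# not numbers given in the thought experiment.

reward_A = 1.0   # happiness reward if you're still in the room at 11AM
reward_B = 1.0   # happiness reward for a copy that wakes up in paradise

measure_in_room     = 1.0   # the one instance still sitting in the cold
measure_in_paradise = 3.0   # say three copies are run on the beach (assumption)

# Valuing happiness in all copies equally means weighting each reward by
# the anthropic measure of the instances that receive it.
value_A = measure_in_room * reward_A          # 1.0
value_B = measure_in_paradise * reward_B      # 3.0

print("push button", "B" if value_B > value_A else "A")   # -> B
```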
This gets a little weird with the clones and to what extent you should care about them, but there’s an analogous situation where I think the anthropic measure solution is clearly more intuitive: death. Suppose the many worlds interpretation is true and you set up a situation so that you die in 99% of worlds. Then should you “anticipate” death, or anticipate surviving? Anticipating death seems like the right thing. A hedonist should not be willing to sacrifice 1 unit of pleasure before quantum suicide in order to gain 10 units on the off chance that they survive.
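The arithmetic behind that last claim, as a rough sketch, weighting each outcome by anthropic measure rather than by the probability of having any future experience at all:

```python
# Quantum-suicide deal: sacrifice 1 unit of pleasure now for 10 units
# on the off chance of surviving.

survive_measure = 0.01   # you survive in 1% of worlds
die_measure     = 0.99

# The 1-unit cost is paid in every world (full measure); the 10-unit gain
# only accrues to the surviving measure.
measure_weighted_value = -1.0 * (survive_measure + die_measure) + 10.0 * survive_measure
print(measure_weighted_value)   # -0.9, so the hedonist should refuse

# If instead you "anticipated" only the branches in which you have future
# experiences, the same deal would look like -1 + 10 = +9 and seem worth taking.
```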
So I think that one’s anticipation of the future should not be a probability distribution over sensory input sequences (which sums to 1), but rather a finite non-negative distribution (which sums to some non-negative real number).
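As a toy illustration of what such an unnormalized anticipation might look like, carrying over the assumed numbers from the two examples above:

```python
# Anticipation as a finite, non-negative distribution over future
# experiences; the values are anthropic measures and need not sum to 1.

# 99% quantum suicide: the anticipated experiences sum to 0.01.
anticipation_suicide = {"wake up alive": 0.01}

# Promising to run three beach copies: the anticipation sums to 4.0.
anticipation_copies = {"still in the cold": 1.0, "on the beach": 3.0}

for name, dist in [("suicide", anticipation_suicide), ("copies", anticipation_copies)]:
    print(name, sum(dist.values()))   # 0.01 and 4.0, neither sums to 1
```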
Anthropic measure (magic reality fluid) measures what the reality is—it’s like how an outside observer would see things. Anthropic measure is more properly possessed by states of the universe than by individual instances of you.
It doesn’t look like a helpful notion and seems very tautological. How do I observe this anthropic measure—how can I make any guesses about what the outside observer would see?
Even though you can make yourself expect (probability) to see a beach soon, it doesn’t change the fact that you actually still have to sit through the cold (anthropic measure).
Continuing—how do I know I’d still have to sit through the cold? Maybe I am in my simulated past—in this hypothetical scenario, that’s a very down-to-earth assumption.
Sorry, but the above doesn’t clarify anything for me. I might accept that the concept of probability is out of scope here, that Bayesianism doesn’t work for guessing whether one is or isn’t in a certain simulation, but I don’t know if that’s what you meant.
How do I observe this anthropic measure—how can I make any guesses about what the outside observer would see?
The same way you’d make such guesses normally—observe the world, build an implicit model, make interpretations etc. “How” is not really an additional problem, so perhaps you’d like examples and motivation.
Suppose that I flip a quantum coin, and if it lands heads I give you cake and tails I don’t—you expect to get cake with 50% probability. Similarly, if you start with 1 unit of anthropic measure, it gets split 0.5/0.5 between the cake and no-cake branches. Everything is ordinary.
However, consider the case where you get no cake, but I run a perfect simulation of you in which you get cake in the near future. At some point after the simulation has started, your proper probability assignment is 50% that you’ll get cake and 50% that you won’t, just like in the quantum coin flip. But now, if you start with 1 unit of anthropic measure, your measure never changes—instead a simulation is started in the same universe that also gets 1 unit of measure!
If all we cared about in decision-making was probabilities, we’d treat these two cases the same (e.g. you’d pay the same amount to make either happen). But if we also care about anthropic measure, then we will probably prefer one over the other.
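A minimal bookkeeping sketch of the two cake scenarios, assuming (as described above) that the simulation really does get a full unit of measure:

```python
# Probabilities (your state of knowledge) are identical in the two cases;
# the anthropic-measure bookkeeping is not.

# Scenario 1: quantum coin -- your 1 unit of measure splits across branches.
quantum = {"cake": 0.5, "no cake": 0.5}

# Scenario 2: no cake for you, but a perfect simulation of you that gets
# cake is started and (by assumption) also receives 1 unit of measure.
simulation = {"cake": 1.0, "no cake": 1.0}

def probability(measures):
    # Probabilities come from normalizing, so the two cases look the same.
    total = sum(measures.values())
    return {k: v / total for k, v in measures.items()}

print(probability(quantum), probability(simulation))   # both 50/50

# A measure-weighted valuation of "cake gets experienced" does differ:
print(quantum["cake"], simulation["cake"])   # 0.5 vs 1.0
```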
It’s also important to keep track of anthropic measure as an intermediate step to getting probabilities in nontrivial cases like the Sleeping Beauty problem. If you only track probabilities, you end up normalizing too soon and too often.
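For instance, here is one way to make “normalizing too soon” concrete for Sleeping Beauty, under the assumption that each awakening in a branch carries that branch’s full measure (that assumption, not the code, is doing the philosophical work):

```python
# Unnormalized anthropic measure per awakening, under the assumption above.
awakening_measure = {
    ("heads", "Monday"):  0.5,
    ("tails", "Monday"):  0.5,
    ("tails", "Tuesday"): 0.5,
}

# Normalize only at the very end to get a credence on waking:
total = sum(awakening_measure.values())                        # 1.5
credence = {k: v / total for k, v in awakening_measure.items()}
print(credence)   # 1/3 each, the "thirder" numbers

# Normalizing too soon (renormalizing within each coin outcome before
# combining) would instead give 1/2, 1/4, 1/4, the "halfer" numbers.
```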
Sorry, but the above doesn’t clarify anything for me. I might accept that the concept of probability is out of scope here, that Bayesianism doesn’t work for guessing whether one is or isn’t in a certain simulation, but I don’t know if that’s what you meant.
I mean something a bit more complicated—that probability is working fine and giving sensible answers, but that when probability measure and anthropic measure diverge, probabilities no longer fit into decision-making in a simple way, even though they still really do reflect your state of knowledge.
There are many kinks in what a better system would actually be, and hopefully I’ll eventually work them out and write up a post.