All possible worlds are real, and probabilities represent how much I care about each world.
Could you elaborate on what it means to have a given amount of “care” about a world? For example, suppose that I assign (or ought to assign) probability 0.5 to a coin’s coming up heads. How do you translate this probability assignment into language involving amounts of care for worlds?
You care equally for your selves that see heads and your selves that see tails. If you don’t care what happens to you after you see heads, then you would assign probability one to tails. Of course, you’d be wrong in about half the worlds, but hey, no skin off your nose. You’re the one who sees tails. Those other guys … they don’t matter.
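A minimal sketch of that picture, under the simplest formalization I can supply (not necessarily the one intended): give each world $x$ a non-negative caring weight $w(x)$, and read the probability of an event $E$ as its share of the total weight,

$$P(E) = \frac{\sum_{x \in E} w(x)}{\sum_{x} w(x)}.$$

Equal weight on the heads-worlds and the tails-worlds gives $P(\text{heads}) = P(\text{tails}) = 0.5$, while setting $w(x) = 0$ on every heads-world gives $P(\text{tails}) = 1$, as claimed above.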
A bizarre interpretation.
For example, caring about “living until tomorrow” does not normally mean assigning a zero probability to death in the interim. If anything, that would tend to make you fearless—indifferent to whether you stepped in front of a bus or not—the very opposite of what we normally mean by “caring” about some outcome.
Thanks. That makes it a lot clearer.
It seems like this “caring” could be analyzed a lot more, though. For example, suppose I were an altruist who continued to care about the “heads” worlds even after I learned that I’m not in them. Wouldn’t I still assign probability ~1 to the proposition that the coin came up tails in my own world? What does that probability assignment of ~1 mean in that case?
I suppose the idea is that a probability captures not only how much I care about a world, but also how much I think that I can influence that world by acting on my values.
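Reading that suggestion through the same sketch (again an assumed formalization, not something stated here or in the linked post): let $c(x)$ measure how much I care about world $x$ and $i(x)$ how much my actions can influence it, and take the weight to be $w(x) = c(x)\,i(x)$ before normalizing as above. A world I care about but cannot affect then gets weight zero, so it drops out of the probabilities that guide my decisions.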
See http://lesswrong.com/lw/15m/towards_a_new_decision_theory/ for more details. Many of my later posts can be considered explanations/justifications for the “design choices” I made in that post.