It’s easier to think about unpredictability without picturing Many Worlds—e.g., would we say “don’t worry about driving too fast, because there will be plenty of worlds where we don’t kill anybody”?
Yes, the problem is that it is easy to imagine Many Worlds… incorrectly.
We care about the ratio of branches where we survive, and yet, starting from the Big Bang, the ratio of branches where we ever existed is almost zero. So, um, why exactly should we be okay with this almost-zero ratio, but be very careful about not making it even smaller? Yet this is what we do (before we start imagining Many Worlds).
So for proper thinking perhaps it is better to go with the collapse interpretation. (Until someone starts making incorrect conclusions about mysterious properties of randomness, in which case it is better to think about Many Worlds for a moment.)
Perhaps instead of immediately giving up and concluding that it’s impossible to reason correctly with MWI, it would be better to take the Born rule at face value as a predictor of subjective probability.
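To make "taking the Born rule at face value" concrete, here is a minimal sketch (my own illustration, not part of the original exchange): the Born rule says the probability of observing outcome i is the squared magnitude of that outcome's amplitude.

```python
import math

def born_probabilities(amplitudes):
    """Born rule: P(i) = |amplitude_i|^2, normalized over all outcomes."""
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# An unequal superposition with amplitudes 1/2 and sqrt(3)/2:
probs = born_probabilities([0.5, math.sqrt(3) / 2])
print(probs)  # roughly [0.25, 0.75], i.e. 25% and 75%
```

Reading these numbers as subjective probabilities is exactly the "10% should feel like 10%" move discussed below.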
If someone is able to understand 10% as 10%, then this works. But most people don’t. This is why CFAR uses the calibration game.
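The calibration game amounts to simple bookkeeping (a hypothetical sketch; CFAR's actual exercise differs in its details): group your past predictions by stated confidence, then compare each group's observed hit rate to that confidence.

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (stated_confidence, was_correct) pairs.
    Returns {confidence: observed hit rate} per confidence bucket."""
    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[confidence].append(correct)
    return {c: sum(hits) / len(hits) for c, hits in sorted(buckets.items())}

# A well-calibrated forecaster: 9 of 10 claims made at 90% confidence
# are right, and 3 of 5 claims made at 60% confidence are right.
history = [(0.9, True)] * 9 + [(0.9, False)] + [(0.6, True)] * 3 + [(0.6, False)] * 2
report = calibration_report(history)
print(report)  # {0.6: 0.6, 0.9: 0.9}
```

Someone who "understands 10% as 10%" is someone whose report shows hit rates close to the stated confidences.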
People buy lottery tickets with a chance of winning smaller than one in a million, and invest a lot of emotion in them. Imagine that instead you have a quantum event that happens with probability one in a million. Would the same people feel correctly about it?
In situations like these, I find Many Worlds useful for correcting my intuitions (even if in the specific situation the analogy is incorrect, because the source of randomness is not quantum, etc.). For example, if I had a lottery ticket, I could imagine a million tiny slices of my future, and would notice that in the overwhelming majority of them nothing special happened; so I shouldn’t waste my time obsessing over the invisible.
Similarly, if the probability of succeeding at something is 10%, a person can just wave their hands and say “whatever, I feel lucky”… or imagine 10 possible futures, with labels: success, failure, failure, failure, failure, failure, failure, failure, failure, failure. (There is no “lucky” in Many Worlds; there are just multiple outcomes.)
Specifically, for quantum suicide, imagine a planet-sized graveyard, cities and continents filled with graves; then zoom in to one continent, one country, one city, one street, and among the graves find a sole survivor with a gigantic heap of gold, proudly saying: “We, the inhabitants of this planet, are so incredibly smart and rich! I am sure all people from other planets envy our huge per capita wealth!” Suddenly it does not feel like a good idea when someone proposes that your planet should do the same thing.
even if in the specific situation the analogy is incorrect, because the source of randomness is not quantum, etc.
This seems a rather significant qualification. Why can’t we say that the MW interpretation is something that can be applied to any process which we are not in a position to predict? Why is it only properly a description of quantum uncertainty? I suspect many people will answer in terms of the subjective/objective split, but that’s tricky terrain.
If it is about quantum uncertainty, then assuming our knowledge of quantum physics is correct, the calculated probabilities will be correct. And there will be no hidden variables, etc.
If instead I just say “the probability of rain tomorrow is 50%”, then I may be (1) wrong about the probability, and my model also does not include the fact that I or someone else (2) could somehow influence the weather. Therefore modelling subjective probabilities as Many Worlds would provide an unwarranted feeling of reliability.
Having said this, we can use something similar to Many Worlds by describing an 80% probability by saying: in 10 situations like this, I will on average be right in 8 of them and wrong in 2 of them.
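This frequency reading of an 80% credence can be checked by simulation (my illustration, not part of the original exchange): over many “situations like this”, the long-run fraction of correct calls converges toward 0.8.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def simulate(p_correct, n_trials):
    """Count how often an event with probability p_correct actually occurs."""
    hits = sum(random.random() < p_correct for _ in range(n_trials))
    return hits / n_trials

rate = simulate(0.8, 100_000)
print(rate)  # close to 0.8
```

The law of large numbers does the work here: any single trial is just "right" or "wrong", but the 8-in-10 pattern emerges across the ensemble.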
There is just the small difference that it is about “situations like this”, not this specific situation. For example, the specific situation may be manipulated. Let’s say I feel 80% certainty and someone wants to bet money against me. I may think outside the box and realise: wait a moment, people usually don’t offer me bets, so what is different about this specific situation that made this person decide to make a bet? Maybe they have some insider information that I am missing. And by reflecting on this I reduce my certainty. Whereas in a quantum physics situation, if my model says that something will happen with 80% probability, and someone offers to bet, I would say: yes, sure.
Thanks, I think I understand that, though I would put it slightly differently, as follows…
I normally say that probability is not a fact about an event, but a fact about a model of an event, or about our knowledge of an event, because there needs to be an implied population, which depends on a model. When speaking of “situations like this” you are modelling the situation as belonging to a particular class of situations whereas in reality (unlike in models) every situation is unique. For example, I may decide the probability of rain tomorrow is 50% because that is the historic probability for rain where I live in late July. But if I know the current value of the North Atlantic temperature anomaly, I might say that reduces it to 40% - the same event, but additional knowledge about the event and hence a different choice of model with a smaller population (of rainfall data at that place & season with that anomaly) and hence a greater range of uncertainty. Further information could lead to further adjustments until I have a population of 0 previous events “like this” to extrapolate from!
Now I think what you are saying is that subject to the hypothesis that our knowledge of quantum physics is correct, and in the thought experiment where we are calculating from all the available knowledge about the initial conditions, that is the unique case where there is nothing more to know and no other possible correct model—so in that case the probability is a fact about the event as well. The many worlds provide the population, and the probability is that of the event being present in one of those worlds taken at random.
Incidentally, I’m not sure where my picture of probability fits in the subjective/objective classification. Probabilities of models are objective facts about those models, probabilities of events that involve “bets” about missing facts are subjective, while what I describe is dependent on the subject’s knowledge of circumstantial data but free of bets, so I’ll call it semi-subjective until somebody tells me otherwise!
Yeah, that’s it. In the case of a quantum event, the probability (or indexical uncertainty) is in the territory; but in both quantum and non-quantum events, there is a probability in the map, just for different reasons.
In both cases we can use Many Worlds as a tool to visualize what those probabilities in the map mean. But in the case of non-quantum events we need to remember that there can be a better map with different probabilities.
In replying initially, I assumed that “indexical uncertainty” was a technical term for a variable that plays the role of probability, given that in fact “everything happens” in MW and therefore everything strictly has a probability of 1. However, I have now looked up “indexical uncertainty” and find that it means an observer’s uncertainty as to which branch they are in (or, more generally, uncertainty about one’s position in relation to something even though one has certain knowledge of that something). That being so, I can’t see how you can describe it as being in the territory.
Incidentally, I have now added an edit to the quantum section of the OP.
I can’t see how you can describe it as being in the territory.
I probably meant that the fact that indexical uncertainty is unavoidable is part of the territory.
You can’t make a prediction about what exactly will happen to you, because different things will happen to different versions of you (thus, if you make any prediction of a specific outcome now, some future you will observe it was wrong). This inability to predict a specific outcome feels like probability; it feels like a situation where you don’t have perfect knowledge.
So it would be proper to say that “unpredictability of a specific outcome is part of the territory”—the difference is that one model of quantum physics holds that there is intrinsic randomness involved, while the other holds that in fact multiple specific outcomes happen (in different branches).
OK, thanks, I see no problems with that.

Great. Incidentally, that seems a much more intelligible use of “territory” and “map” than in the Sequence claim that a Boeing 747 belongs to the map and its constituent quarks to the territory.