I have now added a hopefully suitable paragraph to the post.
RogerS
In replying initially, I assumed that “indexical uncertainty” was a technical term for a variable that plays the role of probability, given that in MW “everything happens” and therefore everything strictly has a probability of 1. However, I have now looked up “indexical uncertainty” and find that it means an observer’s uncertainty as to which branch they are in (or more generally, uncertainty about one’s position in relation to something, even though one has certain knowledge of that something). That being so, I can’t see how you can describe it as being in the territory.
Incidentally, I have now added an edit to the quantum section of the OP.
Great. Incidentally, that seems a much more intelligible use of “territory” and “map” than the Sequences’ claim that a Boeing 747 belongs to the map and its constituent quarks to the territory.
Thanks, so to get back to the original question of how to describe the different effects of divergence and convergence in the context of MW, here’s how it’s seeming to me. (The terminology is probably in need of refinement).
Considering this in terms of the LW-preferred Many Worlds interpretation of quantum mechanics, exact “prediction” is possible in principle, but what is predicted is the indexical uncertainty over an array of outcomes. (The indexical uncertainty governs the probability of a particular outcome if one is considered at random.) Whether a process is convergent or divergent on a macro scale makes no difference to the number of states that formally need to be included in the distribution of possible outcomes. However, in a convergent process the cases become so similar that there appears to be only one outcome at the macro scale; whereas in a divergent process the “density of probability” (in the above sense) becomes so vanishingly small for some states that at a macro scale the outcomes appear to split into separate branches. (They have become decoherent.) Any one such branch appears to an observer within that branch to be the only outcome, so such an observer could not have known what to “expect”—only the probability distribution of what to expect. This can be described as a condition of subjective unpredictability, in the sense that no subjective expectation formed before the divergent process can reliably be expected to coincide with observation after it.
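To put that in symbols (a minimal sketch using standard Born-rule notation, not anything specific to this thread): writing the global state as a superposition of macroscopically distinguishable outcomes,

```latex
|\Psi\rangle = \sum_i c_i \, |i\rangle , \qquad p_i = |c_i|^2 , \qquad \sum_i p_i = 1
```

In a convergent process nearly all the weight \(p_i\) ends up on states that are macroscopically indistinguishable, so there appears to be a single outcome; in a divergent process the weight spreads across decoherent branches, and an observer in branch \(j\) could at best have “expected” the distribution \(\{p_i\}\), never the particular \(i = j\) they find themselves in.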
There are no discrete “worlds” and “branches” in quantum physics as such.
This seems to conflict with references to “many worlds” and “branch points” in other comments, or is the key word “discrete”? In other words, the states are a continuum with markedly varying density, so that if you zoom out there is the appearance of branches? I could understand that except for cases like Schrödinger’s cat, where there seems to be a pretty clear branch (at the point where the box is opened, i.e. from the point of view of a particular state, if that is the right terminology).
Once two regions in state space are sufficiently separated to no longer significantly influence each other...
From the big bang there are an unimaginably large number of regions in state space each having an unimaginably small influence. It’s not obvious, but I can perfectly well believe that the net effect is dominated by the smallness of influence, so I’ll take your word for it.
Thanks, I think I understand that, though I would put it slightly differently, as follows…
I normally say that probability is not a fact about an event, but a fact about a model of an event, or about our knowledge of an event, because there needs to be an implied population, which depends on a model. When speaking of “situations like this” you are modelling the situation as belonging to a particular class of situations whereas in reality (unlike in models) every situation is unique. For example, I may decide the probability of rain tomorrow is 50% because that is the historic probability for rain where I live in late July. But if I know the current value of the North Atlantic temperature anomaly, I might say that reduces it to 40% - the same event, but additional knowledge about the event and hence a different choice of model with a smaller population (of rainfall data at that place & season with that anomaly) and hence a greater range of uncertainty. Further information could lead to further adjustments until I have a population of 0 previous events “like this” to extrapolate from!
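As a rough illustration of why the smaller population brings a greater range of uncertainty (a sketch with invented figures, not real rainfall records): if k of the n days on record were rainy, the estimate and its standard error are

```latex
\hat{p} = \frac{k}{n} , \qquad \mathrm{SE}(\hat{p}) = \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}}
```

With n = 100 late-July days and k = 50 rainy ones, \(\hat{p} = 0.5\) and SE ≈ 0.05; restricting to (say) the n = 20 such days that also had the given anomaly, with k = 8, gives \(\hat{p} = 0.4\) and SE ≈ 0.11: a better-informed model, but a wider error band around its probability.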
Now I think what you are saying is that, subject to the hypothesis that our knowledge of quantum physics is correct, and in the thought experiment where we are calculating from all the available knowledge about the initial conditions, we have the unique case where there is nothing more to know and no other possible correct model—so in that case the probability is a fact about the event as well. The many worlds provide the population, and the probability is that of the event being present in one of those worlds taken at random.
Incidentally, I’m not sure where my picture of probability fits in the subjective/objective classification. Probabilities of models are objective facts about those models, probabilities of events that involve “bets” about missing facts are subjective, while what I describe is dependent on the subject’s knowledge of circumstantial data but free of bets, so I’ll call it semi-subjective until somebody tells me otherwise!
So, to get this clear (being well outside my comfort zone here), once a split into two branches has occurred, they no longer influence each other? The integration over all possibilities is something that happens in only one of the many worlds? (My recent understanding is based on “Everything that can happen does happen” by Cox & Forshaw).
even if in the specific situation the analogy is incorrect, because the source of randomness is not quantum, etc.
This seems a rather significant qualification. Why can’t we say that the MW interpretation is something that can be applied to any process which we are not in a position to predict? Why is it only properly a description of quantum uncertainty? I suspect many people will answer in terms of the subjective/objective split, but that’s tricky terrain.
you can consider the whole universe as a big quantum computer, and you’re living in it
I recall hearing it argued somewhere that it’s not so much “a computer” as “the universal computer”, in the sense that it is impossible in principle for there to be another computer performing the calculations from the same initial conditions (and, for example, getting to a particular state sooner). I like that if it’s true. The calculations can be performed, but only by existing.
the multiverse as a whole evolves deterministically
So to get back to my question of what predictability means in a QM universe under MW, the significant point seems to be that prediction is possible starting from the initial conditions of the Big Bang, but not from a later point in a particular universe (without complete information about all the other universes that have evolved from the Big Bang)?
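In symbols (the standard textbook statement, added here only for concreteness): the global state evolves unitarily,

```latex
i\hbar \, \frac{\partial}{\partial t} \, |\Psi(t)\rangle = \hat{H} \, |\Psi(t)\rangle
\quad \Longrightarrow \quad
|\Psi(t)\rangle = e^{-i\hat{H}t/\hbar} \, |\Psi(0)\rangle
```

Given \(|\Psi(0)\rangle\) (the state at the Big Bang, in this thought experiment), every later global state is fixed; but an observer inside one branch has access only to a single component of \(|\Psi(t)\rangle\), which is not enough information to run the calculation forward exactly.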
the truth-value of the claim, which is what we’re discussing here
More precisely, it’s what you’re discussing. (Perhaps you mean I should be!) In the OP I discussed the implications of an infinitely divisible system for heuristic purposes without claiming such a system exists in our universe. Professionally, I use Newtonian mechanics to get the answers I need without believing Einstein was wrong. In other words, I believe true insights can be gained from imperfect accounts of the world (which is just as well, since we may well never have a perfect account). But that doesn’t mean I deny the value of worrying away at the known imperfections.
Well, I didn’t quite say “choose what is true”. What truth means in this context is much debated and is another question. The present question is to understand what is and isn’t predictable, and for this purpose I am suggesting that if the experimental outcomes are the same, I won’t get the wrong answer by imagining CI to be true, however unparsimonious it is. If something depends on whether an unstable nucleus decays earlier or later than its half-life, I don’t see how the inhabitants of the world where it has decayed early and triggered a tornado (so to speak) will benefit much by being confident of the existence of a world where it decayed late. Or isn’t that the point?
I agree, I had thought of mentioning this but it’s tricky. As I understand it, living in one of Many Worlds feels exactly like living in a single “Copenhagen Interpretation” world, and the argument is really over which is “simpler” and generally Occam-friendly—do you accept an incredibly large number of extra worlds, or an incredibly large number of reasons why those other worlds don’t exist and ours does? So if both interpretations give rise to the same experience, I think I’m at liberty to adopt the Vicar of Bray strategy and align myself with whichever interpretation suits any particular context. It’s easier to think about unpredictability without picturing Many Worlds—e.g. do we say “don’t worry about driving too fast because there will be plenty of worlds where we don’t kill anybody?” But if anybody can offer a Many Worlds version of the points I have made, I’d be most interested!
Yes, that looks like a good summary of my conclusions, provided it is understood that “subsystems” in this context can be of a much larger scale than the subsystems within them which diverge. (Rivers converge while eddies diverge).
Perhaps “hedging” is another term that also needs expanding here. One can reasonably assume that Penrose’s analysis has some definite flaws in it, given the number of probable flaws identified, while still suspecting (for the reasons you’ve explained) that it contains insights that may one day contribute to a sounder analysis. Perhaps the main implication of your argument is that we need to keep arguments in our minds in more categories than just a spectrum from “strong” to “weak”. Some apparently weak arguments may be worth periodic re-examination, whereas many probably aren’t.
“having different descriptions at different levels” is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory
Why do we distinguish “map” and “territory”? Because they correspond to “beliefs” and “reality”, and we have learnt elsewhere in the Sequences that
my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.
Let’s apply that test. It isn’t only predictions that apply at different levels; so do the results. We can have right or wrong models at quark level, atom level, crystal level, and engineering-component level. At each level, the fact that one model is right and another wrong is a fact about reality: it is Talking about Territory. When we say a 747 wing is really there, we mean that (for example) visualising it as a saucepan will result in expectations that the results will not fulfil, in the way that they will when we visualise it as a wing. Indeed, we can have many different models of the wing, all equally correct, since they all result in predictions that conform to the same observations. The choice of correct model is what is in our heads. The fact that it has to be (equivalent to) a model of a wing to be correct is in the Territory. In short, when Talking about Territory we can describe things at as many levels (of aggregation) as yield descriptions that can be tested against observation.
at different levels
What exactly is meant by “levels” here? The Naval Gunner is arguing about levels of approximation. The discussion of Boeing 747 wings is an argument about levels of aggregation. They are not the same thing. Treating the forces on an aircraft wing at the aggregate level is leaving out internal details that per se do not affect the result. There will certainly be approximations involved in practice, of course, but they don’t stem from the actual process of aggregation, which is essentially a matter of combining all the relevant force equations algebraically, eliminating internal forces, before solving them; rather than combining the calculated forces numerically.
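The textbook version of that elimination (a standard derivation, spelt out only to make “aggregation” concrete): for each particle i in the wing,

```latex
m_i \, \ddot{\mathbf{r}}_i = \mathbf{F}^{\mathrm{ext}}_i + \sum_{j \neq i} \mathbf{F}_{ij} ,
\qquad \mathbf{F}_{ij} = -\mathbf{F}_{ji}
```

Summing over all particles, the internal forces cancel in pairs, leaving \(M \ddot{\mathbf{R}} = \sum_i \mathbf{F}^{\mathrm{ext}}_i\) with \(M = \sum_i m_i\) and \(\mathbf{R}\) the centre of mass. Nothing has been approximated: the internal details are eliminated exactly, not neglected.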
...the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces
The way that reality works, as far as we can tell, is that there are basic ingredients, with their properties, which in any given system at any given instant exist in a particular configuration. Now reality is not just the ingredients but also the configuration—a wrong model of the configuration will give wrong predictions just as a wrong model of the ingredients will. The possible configurations include known stable structures. These structures are likewise real, because any model of a configuration which cannot be transformed into a model which includes the identified structure in question is in conflict with reality. Physics as I understand it comprises (a) laws that are common to different configurations of the ingredients, and (b) laws that are common to different configurations of the known stable structures. Physicalism implies the belief that laws (b) are always consistent with laws (a) when both are sufficiently accurate.
...The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings
True, but the key word here is “additional”. Newton’s laws were undoubtedly laws of physics, and in my school physics lessons they were expressed in terms of forces on bodies, rather than on their constituent particles. The laws for forces on constituent particles were then derived from Newton’s laws by a thought experiment in which a body is divided up. In higher education today the reverse process is the norm, but reality is indifferent to which equivalent formulation we use: both give identical predictions. [Original wording edited]
General Relativity contains the additional causal entity known as space-time curvature, which is an aggregate effect of all the massive particles in the universe given their configuration, and so is not a natural fit in the Procrustean bed of reductionism. [Postscript] Interestingly, I’ve read that Newton was never happy with his idea of gravitation as a force of attraction between two things, because it implied a property shared between the two things concerned and therefore intrinsic to neither—but he failed to find a better formulation.
The critical words are really and see
Indeed, but when you see a wing it is not just in the mind; it is also evidence of how reality is configured. It is the result of the experiment you perform by looking.
.. the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought
What the gunner really thought is pure speculation of course, but this assumption by EY raises an important point about meta-models.
In thought experiments the outcome is determined by the applicable universal laws – that’s meta-model (A). In any real-world case you need a model of the application as well as models of the universal laws – that’s meta-model (B). An actual artillery shell will be affected by things like air resistance, so the greater accuracy of Einstein’s laws in textbook cases is no guarantee of their giving more accurate results in this case. EY obviously knew this, but his meta-model excluded it from consideration here. Treating the actual application as a case governed only by Newton’s or Einstein’s laws is itself a case of the “Mind Projection Fallacy” – projecting meta-model (A) onto a real-world application. So it’s not a case of the gunner mistaking a model for reality, but of his mistaking the criteria for choosing between one imperfect model and another. I imagine gunners are generally practical men, and in the field of the applied sciences it is very common for competing theories to have their own fields of application where they are more accurate than the alternatives – so although he was clearly misinformed, at least his meta-model was the right one.
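To put rough numbers on the shell example (illustrative figures of my own, not from any ballistics table): at a muzzle velocity of about 800 m/s, the relativistic correction is of order

```latex
\frac{v^2}{c^2} \approx \frac{(8 \times 10^2)^2}{(3 \times 10^8)^2} \approx 7 \times 10^{-12}
```

whereas air resistance can shorten the range of long-range fire by tens of percent. A Newtonian model with drag will therefore beat a relativistic model without it by many orders of magnitude, which is exactly the meta-model (B) point.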
[Postscript] An arguable version of reductionism is the belief that laws about the ingredients of reality are in some sense “more fundamental” than laws about stable structures of the ingredients. This cannot be an empirical truth, since both sets of laws give the same predictions where they overlap and so cannot be empirically distinguished. Neither is any logical contradiction implied by its negation. It can only be a metaphysical truth, whatever that is. Doesn’t it come down to believing Einstein’s essentialist concept of science against Bohr’s instrumentalist version? That science doesn’t just describe, but also explains? So pick Bohr as an opponent if you must, not some anonymous gunner.
I’m not clear what you are meaning by “spatial slice”. That sounds like all of space at a particular moment in time. In speaking of a space-time region I am speaking of a small amount of space (e.g. that occupied by one file on a hard drive) at a particular moment in time.
..absent collapse..
Ah, is that so.
But a 4D descriptions of all the changes involved in the copy-and-delete process would be sufficient..
Yes, I can see that that’s one way of looking at it.
In fact, your problem would be false positives
I don’t think so, since the information I would be comparing in this case (the “file contents”) would be just a reduction of the information in two regions of space-time.
Reducing to “physical properties” is not necessarily the same as reducing to “the physical properties of the ingredients”. I would have thought physicalists think mental properties can be reduced to physical properties, while reductionists identify these with the physical properties of the ingredients. I suppose one way of looking at it is that when you say “in principle”, the principles you refer to are physical principles; whereas when emergentists see obstacles “in principle” wherever certain kinds of complexity are present, the principles they refer to are more properly described as mathematical.
Mental events can certainly be reduced to physical events, but I would take mental properties to be the properties of the set of all possible such events, and the possibility of connecting these to the properties of the brain’s ingredients even in principle is certainly not self-evident.
OK, thanks, I see no problems with that.