The sense in which they did not come about their beliefs by starting with sane priors which did not presuppose reductionism, and then updating on evidence until they independently discovered reductionism.
I disagree with the grandparent, however: I believe that (most) non-math-geniuses advocating for reductionism are more akin to Einstein believing in General Relativity before any novel predictions had been verified: recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs.
I did not say that non-reductionism is absurd. I said that “recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs”.
Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.
Can you explain to me how it might work?
Edit:
I googled “Robert Laughlin Reductionism” and actually found a longish paper he wrote about reductionism and his beliefs. I have some criticisms:
Who are we to enact that the laws governing the behavior of particles are more ultimate than the transcendent, emergent laws of the collective they generate, such as the principles of organization responsible for emergent behavior? According to the physicist George F. R. Ellis true complexity emerges as higher levels of order from, but to a large degree independent of, the underlying low-level physics. Order implies higher-level systemic organization that has real top-down effects on the behavior of the parts at the lower level. Organized matter has unique properties (Ellis 2004).
Yudkowsky has a great refutation of using the description “emergent” to describe phenomena, in The Futility of Emergence. From there:
I have lost track of how many times I have heard people say, “Intelligence is an emergent phenomenon!” as if that explained intelligence. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that intelligence is “emergent”? You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don’t anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there’s no detailed internal model to manipulate. Those who proffer the hypothesis of “emergence” confess their ignorance of the internals, and take pride in it; they contrast the science of “emergence” to other sciences merely mundane.
And even after the answer of “Why? Emergence!” is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.
Further down in the paper, we have this:
They point to higher organizing principles in nature, e.g. the principle of continuous symmetry breaking, localization, protection, and self-organization, that are insensitive to and independent of the underlying microscopic laws and often solely determine the generic low-energy properties of stable states of matter (‘quantum protectorates’) and their associated emergent physical phenomena. “The central task of theoretical physics in our time is no longer to write down the ultimate equations but rather to catalogue and understand emergent behavior in its many guises, including potentially life itself. We call this physics of the next century the study of complex adaptive matter” (Laughlin and Pines 2000).
Every time he makes the specific claim that reductionism makes worse predictions than a belief in “emergent phenomenon” in which “organizational structure” is an additional property that all of reality must have, in addition to “mass” and “velocity”, he cites himself for this. He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right, and reductionism was wrong.
He goes on to say that reductionism is popular because you can always examine a system by looking at its internal mechanisms, but you can’t always examine a system by looking at it from a “higher” perspective. A good example, he says, is genetic code: to assume that DNA is actually a complete algorithmic description of how to build a human body is an illogical conclusion, according to him.
He would rather suppose that the universe contains rules like “When a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism” than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-reductionism, since non-reductionism is the same thing as reductionism except with the addition of special cases.
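To make that simplicity comparison concrete, here is a toy sketch in Python (purely illustrative; the rules below are made-up placeholders, not real physics): the “non-reductionist” update is just the reductionist update plus an extra clause keyed on a high-level pattern, and that extra clause is exactly the added description length.

```python
# Toy illustration only: neither rule corresponds to any real physical law.

def low_level_rule(x):
    # The same "law of physics" applied to every element identically.
    return x + 1

def matches_high_level_pattern(state):
    # Hypothetical stand-in for "a certain organizational structure is present".
    return state == [1, 2, 3]

def reductionist_step(state):
    # One rule, applied uniformly, with no regard for organization.
    return [low_level_rule(x) for x in state]

def non_reductionist_step(state):
    # The same rule, plus a special-case clause that overrides it
    # whenever the high-level pattern appears.
    if matches_high_level_pattern(state):
        return [0 for _ in state]
    return [low_level_rule(x) for x in state]

print(reductionist_step([1, 2, 3]))      # [2, 3, 4]
print(non_reductionist_step([1, 2, 3]))  # [0, 0, 0]  (the exception fires)
```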
He specifically objects that reductionism isn’t always the “most complete” description of a given phenomenon; that elements of a given phenomenon “cannot be explained” by looking at the underlying mechanism of that phenomenon.
I think this is nonsense. Even supposing that the laws of physics contain special cases for things like creating a human body out of DNA, or for things like consciousness, then in order for such special case exceptions to actually be implemented by the universe, they must be described in terms of the bottom-most level. Even if a DNA strand is not enough information to create a human being, and the actual program which creates the human being is hard coded into the universe, the object that the program must manipulate is still the most basic element of reality, the wavefunction, and therefore the program must specify how certain amplitude configurations must evolve, and therefore the program must describe reality on the level of quarks.
This is still reductionism, it is just reductionism with the assumed belief that the laws of physics were designed such that certain low-level effects would take place if certain high-level patterns came about in the wavefunction.
This is the only coherent way I could possibly imagine consciousness being an “emergent phenomenon”, or the creation of a human body from the blueprints of DNA being impossible without additional information. Do you suppose Laughlin was saying something else?
At first when I read EY’s “The Futility of Emergence” article, I didn’t understand. It seemed to me that there’s no way people actually think of “emergence” as being a scientific explanation for how a phenomenon occurs such that you could not predict that the phenomenon would occur if you know how every piece of the system worked individually. I didn’t think it possible that anyone would actually think that knowing how all of the gears in a clock work doesn’t mean you’ll be able to predict what the clock will say based on the positions of the gears (for sufficiently “complex” clocks). And so I thought that EY was jumping the gun in this fight.
But perhaps he read this very paper, because Laughlin uses the word “emergent phenomenon” to describe behavior he doesn’t understand, as if that’s an explanation for the phenomenon. Even though you can’t use this piece of information to make any predictions as to how reality is. Even though it doesn’t constrain your anticipation into fewer possibilities, which is what real knowledge does. He uses this word as a substitute for “magic”; he does not know how an extremely complex phenomenon works, and so he supposes that the actual mechanism for the phenomenon is not enough to fully explain the phenomenon, that additional aspects of the phenomenon are simply uncaused, or that there is a special-case exclusion in the universe’s laws for the phenomenon.
He does not explore the logical implications of this belief: that some aspects of a phenomenon have no causal mechanism, and therefore could not possibly have been predicted. He makes the claim that a hypothetical Theory of Everything would not be able to explain some of the things we find interesting about some phenomena. Does he believe that if we programmed a physics simulator with the Correct Theory of Everything, and fed it the boundary conditions of the universe, then that simulated universe would not look exactly like our universe? That the first time DNA occurred on earth, in that simulated universe, it would not be able to create life (unlike in our universe) because we didn’t include in the laws of physics a special clause saying that when you have DNA, interpret it and then tell the quarks to move differently from how they would have?
I believe that DNA contains real instructions for how to construct an entire human from start to finish. I don’t think the laws of physics contain such a clause.
I read the whole paper by Laughlin and I was unimpressed. If this is the best argument against reductionism, then reductionism is undoubtedly the winner. You called Laughlin a “smart person”, but he isn’t smart enough to realize that calling the creation of humans from DNA an “emergent phenomenon” is literally equivalent to calling it a “magic phenomenon”, in that it doesn’t limit your anticipation of what could happen. If you can equally explain every possible outcome, you have no knowledge...
It’s a bit of an aside to your main point, but there are good arguments to support the assertion that DNA is only a partial recipe for an organism, such as a human. The remaining information is present in the environment of the mother’s womb in other forms—for example, where there’s an ambiguity in the DNA with regards to the folding of a certain protein, other proteins present in the womb may correct any incorrectly folded samples.
To look at your main point; if I were to present an argument against reductionism, I would point to the personal computer. This is a device constructed in order to run software; that is, to follow a list of instructions that manipulate binary data. Once you have a list of all the instructions that the computer can follow, and what these instructions do, a thorough electrical analysis of the computer’s circuitry isn’t going to provide much new information; and it will be a lot more complicated, and harder to understand. There’s a conceptual point, there, at the level of individual software instructions, where further reductionism doesn’t help to understand the phenomenon, and does make the analysis more complicated, and harder to work with.
A thorough electrical analysis is, of course, useful if one wishes to confirm that the stated behaviour of the basic software commands is both correctly stated, and free of unexpected side-effects. However, an attempt to describe (say) the rendering of a JPEG image in terms of which transistors are activated at which point is likely a futile exercise.
The remaining information is present in the environment of the mother’s womb in other forms—for example, where there’s an ambiguity in the DNA with regards to the folding of a certain protein, other proteins present in the womb may correct any incorrectly folded samples.
As an aside to an aside, I wonder how much information about the DNA reading frame could in principle be extracted from the DNA of a female organism, given the knowledge (or the assumption) that mature females can gestate a zygote? Almost all possible reading frames would be discardable on the grounds that the resulting organism would not be able to gestate a zygote, of course, but I don’t have any intuitive sense of how big the remaining search space would be.
And as a nod towards staying on topic:
a thorough electrical analysis of the computer’s circuitry isn’t going to provide much new information;
Well, it will, and it won’t.
If what I mostly care about is the computer’s behavior at the level of instructions, then sure, understanding the instructions gets me most of the information that I care about. Agreed.
OTOH, if what I mostly care about is the computer’s behavior at the level of electrical flows through circuits (for example, if I’m trying to figure out how to hack the computer without an input device by means of electrical induction, or confirm that it won’t catch fire in ordinary use), then a thorough electrical analysis of the computer’s circuitry provides me with tons of indispensable new information.
What counts as “information” in a colloquial sense depends a lot on my goals. It might be useful to taboo the word in this discussion.
As an aside to an aside, I wonder how much information about the DNA reading frame could in principle be extracted from the DNA of a female organism, given the knowledge (or the assumption) that mature females can gestate a zygote? Almost all possible reading frames would be discardable on the grounds that the resulting organism would not be able to gestate a zygote, of course, but I don’t have any intuitive sense of how big the remaining search space would be.
My intuition says “very, very big”. Consider: depending on womb conditions, the percentage of information expressed in the baby which is encoded in the DNA might change. As an extreme example, consider a female creature whose womb completely ignores the DNA of the zygote, creating instead a perfect clone of the mother. Such an example makes it clear that the search space is at least as large as the number of possible female creatures that are able to produce a perfect clone of themselves.
OTOH, if what I mostly care about is the computer’s behavior at the level of electrical flows through circuits (for example, if I’m trying to figure out how to hack the computer without an input device by means of electrical induction, or confirm that it won’t catch fire in ordinary use), then a thorough electrical analysis of the computer’s circuitry provides me with tons of indispensable new information.
I accept your point. Such an analysis does provide a more complete view of the computer, which is useful in some circumstances.
the search space is at least as large as the number of possible female creatures that are able to produce a perfect clone of themselves.
Sure, I agree that one permissible solution is a decoder which produces an organism capable of cloning itself. And while I’m willing to discard as violating the spirit of the thought experiment decoder designs which discard the human DNA in its entirety and create a predefined organism (in much the same sense that I would discard any text-translation algorithm that discarded the input text and printed out the Declaration of Independence as a legitimate translator of the input text), there’s a large space of possibilities here.
Would you be willing to consider, i.e. not discard, a decoder that used the human DNA as merely a list of indexes, downloading the required genes from some sort of internal lookup table?
By changing the lookup table, one can dramatically change the resulting organism; and having a different result for every viable human DNA is merely a result of having a large enough lookup table. It would be, to extend your metaphor, like a text-translation algorithm that returned the Declaration of Independence if given Alice in Wonderland as input, and returned Alice in Wonderland if given Hamlet.
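A toy Python sketch of that lookup-table decoder (the table entries are just the books from the metaphor, entirely hypothetical): the input is used only as an index, yet every distinct input still maps to a distinct output, because the real content lives in the table.

```python
# Hypothetical "decoder" that treats its input purely as an index into an
# internal lookup table: the output varies with the input, but the
# instructions live in the table, not in the input itself.
LOOKUP_TABLE = {
    "Alice in Wonderland": "Declaration of Independence",
    "Hamlet": "Alice in Wonderland",
}

def decode(input_text):
    # No interpretation of the input happens here, only a table lookup.
    return LOOKUP_TABLE.get(input_text, "<no entry for this input>")

print(decode("Alice in Wonderland"))  # Declaration of Independence
print(decode("Hamlet"))               # Alice in Wonderland
```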
an attempt to describe (say) the rendering of a JPEG image in terms of which transistors are activated at which point is likely a futile exercise
Well, yes—but that arises from the fact that such devices are man-made, and (out of respect to our brains’ limitations) designed to isolate the layers of explanation from one another—to obviate the need for a fully reductionistic account. The argument will not apply to things not man-made.
The argument will not apply to things not man-made.
Not true. There is a reason no one uses quarks to describe chemistry. It’s futile to describe what’s happening in superfluid helium in terms of individual particle movement. Far better to use a two-fluid model, and vortices.
Let me amend that: the argument will not necessarily apply to things not man-made. There is a categorical difference in this respect between man-made things and the rest, and my intent was to say: “if you’re going to put up an argument against reductionism, don’t use examples of man-made things”.
Whereas we have good reasons to bar “leaky abstractions” from our designs, Nature labors under no such constraint. If it turns out that some particular process that happens in a superfluid helium can be understood only by referring to the quark level, we are not allowed to frown at Nature and say “oh, poor design; go home, you’re drunk”.
For instance, it turns out we can almost describe the universe in the Newtonian model with its relatively simple equations, a nice abstraction if it were non-leaky, but anomalies like the precession of Mercury turn up that require us to use General Relativity instead, and take it into account when building our GPS systems.
The word “futile” in this context strikes me as wishful thinking, projecting onto reality our parochial notion of how complicated a reductionistic account of the universe “should” be. Past experience tells us that small anomalies sometimes require the overthrow of entire swathes of science, in the name of reductionism: there keep turning up cases where science considers it necessary, not futile, to work things out in terms of the lower levels of description.
I think you are making a bad generalization when you turn to Newtonian mechanics vs. general relativity. There are important ways in which mesons and hadrons are emergent from quarks that have no correspondence to the relationship between Newtonian mechanics and GR.
As length scales increase, quarks go from being loosely bound fundamental degrees of freedom to not-even-good-degrees-of-freedom. At ‘normal’ length scales, free quarks aren’t even allowed. The modern study of materials is also full of examples of emergence (it underlies much work on renormalization groups), although it’s farther from my expertise, so the only example to spring to mind was liquid helium.
The entire science of psychology is based on the idea that it is useful to apply high-level rules to the neural functioning of the human brain. If I decide to eat a cookie, then I explain it in high-level terms; I was hungry, the cookie smelt delicious. An analysis in terms of the effect of airborne particles originating from the cookie on my nasal passages, and subsequent alterations in the pattern of neural activations in my brain, can give a far more complicated answer to the question of why I ate the cookie; but, again, I don’t see how such a more complicated analysis would be better. If I want to understand my motivations more fully, I can do so in terms of mental biases, subconscious desires, and so forth; rather than a neuron-by-neuron analysis of my own brain.
And while it is technically true that I, as a human, am man-made (specifically, that I was made by my parents), a similar argument could be raised for any animal.
Such situations are rare, but not entirely unknown.
I disagree with your entire premise. I think we should pin down this concept of “levels of perspective” with some good jargon at some point, but regardless...
You can look at a computer from the level of perspective of “there are windows on the screen and I can move the mouse around. I can manipulate files on the hard drive with the mouse and the keyboard, and those changes will be reflected inside information boxes in the windows.” This is the perspective most people see a computer from, but it is not a complete description of a computer (i.e. if someone unfamiliar with the concept of computers heard this description, they could not build a computer from base materials.)
You might also see the perspective, “There are many tiny dots of light on a flat surface, lit up in various patterns. Those patterns are caused by electricity moving in certain ways through silica wires arranged in certain ways.” This is, I think, one level lower, but an unfamiliar person could not build a computer from scratch from this description.
Another level down, the description might be: “There is a CPU, which is composed of hundreds of thousands of transistors, arranged into logic gates such that when electricity is sent through them you can perform meaningful calculations. These calculations are written in files using a specific instruction set (“assembly language”). The files are stored on a disk in binary, with the disk containing many cesium atoms arranged in a certain order, which have either an extra electron or do not, representing 1 and 0 respectively. When the CPU needs to temporarily store a value useful in its calculations, it does so in the RAM, which is like the disk except much faster and smaller. Some of the calculations are used to make certain square-shaped lights on a large flat surface blink in certain ways, which provides arbitrary information to the user”. We are getting to the point where an unfamiliar human might be able to recreate a computer from scratch, and therefore can be said to actually “understand” the system.
But still yet there are lower levels. Describing the actual logic gate organization in the CPU, the system used by RAM to store variables, how the magnetic needle accesses a specific bit on the hard drive by spinning it… All of these things must be known and understood in order to rebuild a computer from scratch.
Humans designed the computer at the level of “logic gates”, “bits on a hard drive”, “registers”, etc, and so it is not necessary to go deeper than this to understand the entire system (just as you don’t have to go deeper than “gears and cogs” to understand how a clock works, or how you don’t have to go deeper than “classical physics (billiard balls bouncing into each other)” to understand how a brain works).
But I hope that it’s clear that the mechanisms at the lower levels of a system completely contain within them the behavior of the higher levels of the system. There are no new behaviors which you can only learn about by studying the system from a higher level of perspective; those complicated upper-level behaviors are entirely formed by the simple lower-level mechanisms, all the way down to the wave function describing the entire universe.
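As a minimal sketch of that claim (my own illustration in Python, not a description of any real CPU): addition built out of nothing but NAND gates agrees bit-for-bit with the high-level + operator, so the lower-level description contains the higher-level behavior in its entirety.

```python
# Build 8-bit addition from NAND gates alone and check that it matches
# Python's built-in addition modulo 256.

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, carry_in):
    partial = xor(a, b)
    total = xor(partial, carry_in)
    carry_out = or_(and_(a, b), and_(partial, carry_in))
    return total, carry_out

def add_8bit(x, y):
    result, carry = 0, 0
    for i in range(8):  # ripple-carry through the eight bit positions
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

# The gate-level ("lower-level") account reproduces the arithmetic
# ("higher-level") account exactly.
assert all(add_8bit(x, y) == (x + y) % 256
           for x in range(256) for y in range(256))
```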
That is what reductionism means. If you know the state of the entire wavefunction describing the universe, you know everything there is to know about the universe. You could use it to predict that, in some Everett branches, the assassination of Franz Ferdinand on the third planet from the star Sol in the Milky Way galaxy would cause a large war on that planet. You could use it to predict the exact moment at which any particular “slice” of the wavefunction (representing a particular possible universe) will enter its maximum entropy state. You could use it to predict any possible behavior of anything and you will never be surprised. That is what it means to say that all of reality reduces down to the base-level physics. That is what it means to posit reductionism; that from an information theoretical standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic level of perspective.
If you can demonstrate to me that there is some organizational structure of matter which causes that matter to behave differently from what would be predicted by just looking at the matter in question without considering its organization (which would require, by the way, all of reality to keep track not only of mass and of velocity but also of its organizational structure relative to nearby reality), then I will accept such a demonstration as being a complete and utter refutation of reductionism. But there is no such behavior.
You are right; my example was a bad one, and it does not support the point that I thought it supported. The mere fact that something takes unreasonably long to calculate does not mean that it is not an informative endeavour. (I may have been working from a bad definition of reductionism).
If you can demonstrate to me that there is some organizational structure of matter which causes that matter to behave differently from what would be predicted by just looking at the matter in question without considering its organization
Um. I suspect that this may have been poorly phrased. If I have a lump of carbon, quite a bit of water, and a number of other elements, and I just throw them together in a pile, they’re unlikely to do much—there may be a bit of fizzing, some parts might dissolve in the water, but that’s about it. Yet if I reorganise the same matter into a human, I have an organisation of matter that is able to enter into a debate about reductionism; which I don’t think can be predicted by looking at the individual chemical elements alone.
But that behaviour might still be predictable from looking at the matter, organised in that way, at its most basic level of perspective (given sufficient computing resources). Hence, I suspect that it is not a counter-example.
That is what it means to posit reductionism; that from an information theoretical standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic level of perspective.
That’s a fusion of reductionism and determinism. Reductionism isn’t necessarily false in an indeterministic universe. What is more pertinent is being able to predict higher level properties and laws from lower level properties and laws (synchronously, in the latter case).
No it isn’t? I did not mean you would be able to make predictions which came true 100% of the time. I meant that your subjective anticipation of possible outcomes would be equal to the probability of those outcomes, maximizing both precision and accuracy.
“A property of a system is said to be emergent if it is in some sense more than the “sum” of the properties of the system’s parts. An emergent property is said to be dependent on some more basic properties (and their relationships and configuration), so that it can have no separate existence. However, a degree of independence is also asserted of emergent properties, so that they are not identical to, or reducible to, or predictable from, or deducible from their bases. The different ways in which the independence requirement can be satisfied lead to various sub-varieties of emergence.”—WP
I meant that your subjective anticipation of possible outcomes would be equal to the probability of those outcomes, maximizing both precision and accuracy.
Still determinism, not reductionism. In a universe where:

1a. there are lower-level properties...
1b. operating according to a set of deterministic laws;
2a. there are also higher-level properties...
2b. irreducible to and unpredictable from the lower level properties and laws...
2c. which follow their own deterministic laws;

you would be able to predict the future with complete accuracy, given both sets of laws and two sets of starting conditions. Yet the universe being described is explicitly non-reductionistic.
2a There are also higher-level properties..
2b irreducible to and unpredictable from the lower level properties and laws...
All this means is that, in addition to the laws which govern low-level interactions, there are different laws which govern high-level interactions. But they are still laws of physics; they just sound like “when these certain particles are arranged in this particular manner, make them behave like this, instead of how the low-level properties say they should behave”. Such laws are still fundamental laws, on the lowest level of the universe. They are still a part of the code for reality.
But you are right:
unpredictable from lower level properties
Which is what I said:
That is what it means to posit reductionism; that from an information theoretical standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic [lowest] level of perspective.
Ergo, a reductionistic universe is also deterministic from a probabilistic standpoint, i.e. the lowest level properties and laws can tell you exactly what to anticipate, and with how much subjective probability.
Microphysical laws map microphysical states to other microphysical states. Top-down causation maps macrophysical states to microphysical states.
Such laws are still fundamental laws, on the lowest level of the universe.
In the sense that they are irreducible, yes. In the sense that they are concerned only with microphysics, no.
Ergo, a reductionistic universe is also deterministic from a probabilistic standpoint, i.e. the lowest level properties and laws can tell you exactly what to anticipate, and with how much subjective probability.
Top-down causation maps macrophysical states to microphysical states
Can you name any examples of such a phenomenon?
“Deterministic” typically means that an unbounded agent will achieve probabilities of 1.0.
Oh, well in that case quantum physics throws determinism out the window for sure. I still think there’s something to be said for correctly assigning subjective probabilities to your anticipations such that 100% of the time you think something will happen with a 50% chance, it happens half the time, i.e. you are correctly calibrated.
An unbounded agent in our universe would be able to achieve such absolutely correct calibration; that’s all I meant to imply.
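A tiny sketch of what “correctly calibrated” means in practice (hypothetical forecasts, simulated in Python): of all the events assigned probability 0.5, roughly half should actually occur.

```python
# Simulate a batch of events that genuinely occur with probability 0.5 and
# check that about half of them happen: that is calibration at p = 0.5.
import random

random.seed(1)
stated_p = 0.5
outcomes = [random.random() < stated_p for _ in range(10000)]
print(sum(outcomes) / len(outcomes))  # close to 0.5 for a calibrated forecaster
```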
I did not say that non-reductionism is absurd. I said that “recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs”.
Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.
Can you explain to me how it might work?
One formulation of reductionism is that natural laws can be ordered in a hierarchy, with the higher-level laws being predictable from, or reducible to, the lower ones. So emergentism, in the cognate sense, would be that stack of laws failing to collapse down to the lowest level.
Who are we to enact that the laws governing the behavior of particles are more ultimate than the transcendent, emergent laws of the collective they generate, such as the principles of organization responsible for emergent behavior? According to the physicist George F. R. Ellis true complexity emerges as higher levels of order from, but to a large degree independent of, the underlying low-level physics. Order implies higher-level systemic organization that has real top-down effects on the behavior of the parts at the lower level. Organized matter has unique properties (Ellis 2004).
There are two claims there: one contentious, one not. That there are multiply-realisable, substrate-independent higher-level laws is not contentious. For instance, wave equations have the same form for water waves, sound waves and so on. The contentious claim is that this is ipso facto top-down causation. Substrate-independent laws are still reducible to substrates, because they are predictable from the behaviour of their substrates.
And even after the answer of “Why? Emergence!” is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.
I don’t see how that refutes the above at all. For one thing, Laughlin and Ellis do have detailed examples of emergent laws (in their rather weak sense of “emergent”). For another, they are not calling on emergence itself as doing any explaining. “Emergence isn’t explanatory” doesn’t refute “emergence is true”. For a third, I don’t see any absurdity here. I see a one-word-must-have-one-meaning assumption that is clouding the issue. But where a problem is so fuzzily defined that it is hard even to identify the “sides”, then one can’t say that one side is “absurd”.
Every time he makes the specific claim that reductionism makes worse predictions than a belief in “emergent phenomenon” in which “organizational structure” is an additional property that all of reality must have, in addition to “mass” and “velocity”, he cites himself for this.
Neither are supposed to make predictions. Each can be considered a methodology for finding laws, and it is the laws that do the predicting. Each can also be seen as a meta-level summary of the laws so far found.
He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right, and reductionism was wrong.
EY can’t do that for MWI either. Maybe it isn’t all about prediction.
A good example, he says, is genetic code: to assume that DNA is actually a complete algorithmic description of how to build a human body is an illogical conclusion, according to him.
That’s robustly true. Genetic code has to be interpreted by a cellular environment. There are no self-decoding codes.
He would rather suppose that the universe contains rules like “When a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism” than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-reductionism, since non-reductionism is the same thing as reductionism except with the addition of special cases.
Reductionism is an approach that can succeed or fail. It isn’t true a priori. If reductionism failed, would you say that we should not even contemplate non-reductionism? Isn’t that a bit like Einstein’s stubborn opposition to QM?
He specifically objects that reductionism isn’t always the “most complete” description
I suppose you mean that the reductionistic explanation isn’t always the most complete explanation... well, everything exists in a context.
of a given phenomenon; that elements of a given phenomenon “cannot be explained” by looking at the underlying mechanism of that phenomenon.
There is no a priori guarantee that such an explanation will be complete.
I think this is nonsense. Even supposing that the laws of physics contain special cases for things like creating a human body out of DNA, or for things like consciousness,
That isn’t the emergentist claim at all.
then in order for such special case exceptions to actually be implemented by the universe, they must be described in terms of the bottom-most level.
Why? Because you described them as “laws of physics”? An emergentist wouldn’t. Your objections seem to assume that some kind of reductionism+determinism combination is true in the first place. That’s just gainsaying the emergentist claim.
Even if a DNA strand is not enough information to create a human being, and the actual program which creates the human being is hard coded into the universe, the object that the program must manipulate is still the most basic element of reality, the wavefunction, and therefore the program must specify how certain amplitude configurations must evolve, and therefore the program must describe reality on the level of quarks.
If there is top-down causation, then its laws must be couched in terms of lower-level AND higher-level properties, and are therefore not reductionistic. You seem to be tacitly assuming that there are no higher-level properties.
This is still reductionism, it is just reductionism with the assumed belief that the laws of physics were designed such that certain low-level effects would take place if certain high-level patterns came about in the wavefunction.
Cross-level laws aren’t “laws of physics”. Emergentists may need to assume that microphysical laws have “elbow room”, in order to avoid overdetermination, but that isn’t obviously wrong or absurd.
At first when I read EY’s “The Futility of Emergence” article, I didn’t understand. It seemed to me that there’s no way people actually think of “emergence” as being a scientific explanation for how a phenomenon occurs such that you could not predict that the phenomenon would occur if you know how every piece of the system worked individually.
Can you predict qualia from brain-states?
I didn’t think it possible that anyone would actually think that knowing how all of the gears in a clock work doesn’t mean you’ll be able to predict what the clock will say based on the positions of the gears (for sufficiently “complex” clocks).
Mechanisms have to break down into their components because they are built up from components. And emergentists would insist that that does not generalise.
But perhaps he read this very paper, because Laughlin uses the word “emergent phenomenon” to describe behavior he doesn’t understand, as if that’s an explanation for the phenomenon.
Or as a hint about how to go about understanding them.
He does not explore the logical implications of this belief: that some aspects of a phenomenon have no causal mechanism,
That’s not what emergentism says at all.
and therefore could not possibly have been predicted. He makes the claim that a hypothetical Theory of Everything would not be able to explain some of the things we find interesting about some phenomena. Does he believe that if we programmed a physics simulator with the Correct Theory of Everything, and fed it the boundary conditions of the universe, then that simulated universe would not look exactly like our universe?
That’s an outcome you would get with common or garden indeterminism. Again: reductionism is NOT determinism.
That the first time DNA occurred on earth, in that simulated universe, it would not be able to create life (unlike in our universe) because we didn’t include in the laws of physics a special clause saying that when you have DNA, interpret it and then tell the quarks to move differently from how they would have?
What’s supposed to be absurd there? Top-down causation, or top-down causation that only applies to DNA?
I read the whole paper by Laughlin and I was unimpressed.
The arguments for emergence tend not to be good. Neither are the arguments against. A dispute about a poorly-defined distinction with poor arguments on both sides isn’t a dispute where one side is “absurd”.
I don’t think you understand Laughlin’s point at all. Compare a small volume of superfluid liquid helium, and a small volume of water with some bacteria in it. Both systems have the exact same Hamiltonian, both systems have roughly the same amount of the same constituents (protons, neutrons, electrons), but the systems behave vastly differently. We can’t understand their differences by going to a lower level of description.
Modern material science/solid state physics is the study of the tremendous range of different, complex behaviors that can arise from the same Hamiltonians. Things like spontaneous symmetry breaking are rigorously defined, well-observed phenomena that depend on aggregate, not individual behavior.
Why didn’t he mention superfluidity, or solid state physics, then? The two examples he listed were consciousness not being explainable from a reductionist standpoint, and DNA not containing enough information to come anywhere near being a complete instruction set for building a human (wrong).
Also, I’m pretty sure that the superfluid tendencies of liquid helium-4 come from the fact that it is composed of six particles (two protons, two neutrons, two electrons), each with half-integer spin. Because you can’t make 6 halves add up to anything other than a whole number, quantum effects mean that all of the particles have exactly the same state and are utterly indistinguishable, even positionally, and that’s what causes the strange effects. I do not know exactly how this effect reduces down to individual behavior, since I don’t know exactly what “individual behavior” could mean when we are talking about particles which cannot be positionally distinguished, but to say that superfluid helium-4 and water have the exact same Hamiltonian is not enough to say that they should have the same properties.
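For what it is worth, the spin bookkeeping behind that claim can be written out explicitly (standard angular-momentum addition, added here as a clarification rather than taken from the comment): an even number of spin-1/2 constituents always combines to an integer total spin, which is why the composite helium-4 atom obeys Bose statistics.

```latex
% Six spin-1/2 constituents (2 protons, 2 neutrons, 2 electrons) can only
% combine to an integer total spin, so the helium-4 atom is a boson.
\[
\underbrace{\tfrac{1}{2}\otimes\tfrac{1}{2}\otimes\tfrac{1}{2}\otimes
            \tfrac{1}{2}\otimes\tfrac{1}{2}\otimes\tfrac{1}{2}}_{6\ \text{constituents}}
\;\Longrightarrow\;
S_{\mathrm{total}} \in \{0,\,1,\,2,\,3\} \subset \mathbb{Z}.
\]
```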
Spontaneous symmetry breaking can be reduced down to quantum mechanics. You might solve a field equation and find that there are two different answers as to the mass of two quarks. In one answer, quark A is heavier than quark B, but in the other answer, quark B is heavier than quark A, and you might call this symmetry breaking, but just because when you take the measurement you get one of the answers and not the other, does not mean that the symmetry was broken. The model correctly tells you to anticipate either answer with 1:1 odds, and you’ll find that your measurements agree with this: 50% of the time you’ll get the first measurement, and 50% of the time you’ll get the second measurement. In the MW interpretation, symmetry is not broken. The measurement doesn’t show what really happened, it just shows which branch of the wavefunction you ended up in. Across the entire wavefunction, symmetry is preserved.
Besides, it’s not like spontaneous symmetry breaking is a behavior which arises out of the organization of the particles. It occurs at the individual level.
I don’t know why Laughlin wrote what he did, you didn’t link to the paper. However, he comes from a world where solid state physics is obvious, and “everyone knows” various things (emergent properties of superfluid helium, for instance). Remember, his point of reference as a solid state physicist is quite different from a non-specialist’s, so there is a huge inferential distance. Also remember that in physics “emergent” is a technical, defined concept.
Your explanation of superfluid helium isn’t coherent, and I had a book-length post typed up when a simpler argument presented itself. Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say “in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons,” you do not have enough information to tell me how the system behaves, but from a purely reductionist stand-point, you should. If this doesn’t sway you, let’s agree to disagree, because I think spontaneous symmetry breaking should be enough to make my point, and it’s easier to explain.
I don’t think you understand what spontaneous symmetry breaking is, I have very little idea what you are talking about. Let’s ignore quantum mechanics for the time being, because we can describe what’s happening on an entirely classical level. Spontaneous symmetry breaking arises when the Hamiltonian has a symmetry that the aggregate ground-state does not. That’s the whole definition, and BY DEFINITION it depends on details of the aggregate ground state and the organization of the particles.
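As a concrete instance of that definition (the textbook Mexican-hat example, added for illustration rather than quoted from the comment): the potential below is symmetric under rotating the field by any phase, but every individual ground state sits at one particular point on the circle of minima and so lacks that symmetry.

```latex
% A potential with a continuous symmetry that its ground states do not share.
\[
V(\phi) = -\mu^{2}\,|\phi|^{2} + \lambda\,|\phi|^{4}, \qquad \mu^{2},\,\lambda > 0 .
\]
% V is invariant under \phi \to e^{i\theta}\phi, yet the minima form the circle
\[
|\phi_{\min}| = \sqrt{\mu^{2}/(2\lambda)} \neq 0 ,
\]
% and choosing any one of them spontaneously breaks the symmetry.
```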
And finally you can rigorously prove via renormalization group methods that in many systems the high energy degrees of freedom can be averaged out entirely and have no effect on the form of low-energy theory. In these systems, to describe low energy structures in such theories (most theories) the details of the microphysics literally do not matter. Computational physicists use this to their advantage all the time: if they want to look at meso- or macro-scale physics they assume very simple micromodels that are easy to simulate, instead of realistic ones, and are fully confident they get the same meso and macro results.
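A small numerical illustration of that “the microphysics washes out” point (this is only an analogy via the central limit theorem, not renormalization-group machinery, and the script is my own hypothetical example): two quite different microscopic step rules produce the same macroscopic diffusive spreading.

```python
# Two different micro-models of a random walk give the same macroscopic
# behaviour: the variance of the final position grows as the number of steps,
# independent of the microscopic step distribution (both chosen to have variance 1).
import random

def walk(n_steps, micro_step):
    x = 0.0
    for _ in range(n_steps):
        x += micro_step()
    return x

def coin_step():
    # Micro-model A: discrete +/-1 steps.
    return random.choice([-1.0, 1.0])

def uniform_step():
    # Micro-model B: continuous uniform steps on [-sqrt(3), sqrt(3)].
    return random.uniform(-3.0 ** 0.5, 3.0 ** 0.5)

def variance(samples):
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

random.seed(0)
for micro in (coin_step, uniform_step):
    finals = [walk(1000, micro) for _ in range(5000)]
    print(micro.__name__, round(variance(finals)))  # both print roughly 1000
```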
Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say “in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons,” you do not have enough information to tell me how the system behaves, but from a purely reductionist stand-point, you should.
I’ll admit that I am not a PhD particle physicist, but what you describe as reductionism is not what I believe to be true. If we ignore quantum physics, and describe what’s happening on an entirely classical level, then we can reduce the behavior of a physical system down to its most fundamental particles and the laws which govern the interactions between those basic particles. You can predict how a system will behave by knowing about the position and the velocity of every particle in the system; you do not have to keep separate track of an organizational system as a separate property, because the organization of a physical system can be deduced from the other two properties.
If reductionism, to you, means that by simply knowing the number of electrons, protons, and neutrons which exist in the universe, you should be able to know how the entire universe behaves, then I agree: reductionism is false.
With that in mind, can you give an example of top-down causality actually occurring in the universe? A situation where the behavior of low-level particles interacting cannot predict the behavior of systems entirely composed of those low-level particles, but instead where the high-level organization causes the interaction between the low-level particles to be different?
That’s what I think reductionism is: you cannot have higher-level laws contradict lower-level laws; that when you run the experiment to see which set of laws wins out, the lower-level laws will be correct every single time. Is this something you disagree with?
I don’t think you understand what spontaneous symmetry breaking is
I probably don’t. I was going based off of an AP Physics course in high school. My understanding is basically this: if you dropped a ball perfectly onto the top of a Mexican hat, symmetry would demand that all of the possible paths the ball could take are equally valid. But in the end, the ball only chooses one path, and which path it chose could not have been predicted from the base-level laws. A quick look at Wikipedia confirms that this idea at least has something to do with symmetry breaking, since one of the subsections for “Spontaneous Symmetry Breaking” is called “A pedagogical example: the Mexican hat potential”, and so I cannot be entirely off.
In classical physics, the ball actually takes one path, and this path cannot be predicted in advance. But in QM, the ball takes all of the paths, and different you’s (different slices of the wavefunction which evolved from the specific neuron pattern you call you), combined, see every possible path the ball could have taken, and so across the wavefunction symmetry isn’t broken.
Since you’re a particle physicist and you disagree with this outlook, I’m sure there’s something wrong with it, though.
In these systems, to describe low energy structures in such theories (most theories) the details of the microphysics literally do not matter.
Is this similar to saying that when you are modeling how an airplane flies, you don’t need to model each particular nitrogen atom, oxygen atom, carbon atom, etc in the air, but can instead use a model which just talks about “air pressure”, and your model will still be accurate? I agree with you; modeling every single particle when you’re trying to decide how to fly your airplane is unnecessary and you can get the job done with a more incomplete model. But that does not mean that a model which did model every single atom in the air would be incorrect; it just does not have a large enough effect on the airplane to be noticeable. Indeed, I can see why computational physicists would use higher level models to their advantage, when such high level models still get the right answer.
But reductionism simply says that there is no situation where a high level model could get a more accurate answer than a low level model. The low level model is what is actually happening. Newtonian mechanics is good enough to shoot a piece of artillery at a bunker a mile away, but if you wanted to know with 100% accuracy where the shell was going to land, you would have to go further down than this. The more your model breaks macroscopic behavior down into the interactions between its base components, the closer your model resembles the way reality actually works.
So I think perhaps we are talking past each other. In particular, my definition of reductionism is that we can understand and model complex behavior by breaking a problem into its constituent components and studying them in isolation, i.e. if you understand the micro-Hamiltonian and the fundamental particles well, you understand everything. The idea of ‘emergence’ as physicists understand it (and as Laughlin was using it), is that there are aggregate behaviors that cannot be understood from looking at the individual constituents in isolation.
A weaker version of reductionism would say that to make absolutely accurate predictions to some arbitrary scale we MUST know the microphysics. Renormalization arguments ruin this version of reductionism.
In a sense this
if you wanted to know with 100% accuracy where the shell was going to land, you would have to go further down than this.
seems to be espousing this form of reductionism, which I strongly disagree with. There exist physical theories where knowing microphysics is irrelevant to arbitrarily accurate predictions. Perhaps it would be best to agree on definitions before we make points irrelevant to each other.
there are aggregate behaviors that cannot be understood from looking at the individual constituents in isolation
Can you give me an example of one of these behaviors? Perhaps my google-fu is weak (I have tried terms like “examples of top down causality”, “against reductionism”, “nonreductionist explanation of”), and indeed I have a hard time finding anything relevant at all, but I can’t find a single clearcut example of behavior which cannot be understood from looking at the individual constituents in isolation.
The aforementioned spontaneous symmetry breaking shows up in a wide variety of different systems. But phase changes in general are probably good examples.
As a general pattern, I find that in a lot of my physics-related posts I receive downvotes (both my posts in this very thread), then I request an explanation why, no one responds, and then I receive upvotes. What I really want is just for the people giving the downvotes to give me some feedback.
Physics was my PhD subject, and I believe that what I offer to the community is an above-average knowledge of the subject. If you believe my explanation is poorly thought out, incoherent or just hard to parse, please downvote, but let me know what it is that’s bugging you. I want to be communicating effectively, and without feedback from the people who think my above post is not helpful, I’m likely to interpret downvotes in a noisy, haphazard way.
Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say “in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons,” you do not have enough information to tell me how the system behaves, but from a purely reductionist stand-point, you should.
From a “purely reductionist stand-point” you would still need to know the initial conditions to predict how the system evolves. Yet you act as if this is a knockdown argument against reductionism.
I was just trying to make my point clearer; it’s suggestive, not a knock-out. I think the knock-out argument against a strict reductionism is the renormalization argument.
Also, my training is particle physics, so I have no problem with reductionism in general, simply that as an approach it’s not a great way to understand many problems, and the post I responded to didn’t seem to understand that solid state physicists use emergent as more of a term of art than a ‘magical term.’
Your argument was not even suggestive, it was just wrong, because it ignores that a reductionist account would look at the initial conditions.
Also, my training is particle physics, so I have no problem with reductionism in general, simply that as an approach it’s not a great way to understand many problems, and the post I responded to didn’t seem to understand that solid state physicists use emergent as more of a term of art than a ‘magical term.’
I don’t think that anyone is arguing that modeling physics at a high level of abstraction is not useful. It’s just that the abstract models are computational shortcuts, and where they disagree with less abstract models, the less abstract models will be more accurate.
The point/thrust of JohnWittle’s that I’m arguing against is that the idea of emergent phenomena is inherently silly/stupid, and a ‘magical word’ to gloss over fuzzy thinking. I chose two very different systems in an attempt to show how incredibly sensitive to initial conditions physics can be, which makes the reductionist account (in many instances) the wrong approach. I apologize if this was not clear (and if you were a downvoter, I sincerely appreciate the feedback). Is my point more clear now? (I have resisted the urge to rephrase my original to try to add clarity)
I also purposely chose two systems I believe have emergent behavior (superfluid helium certainly does, biological entities/bacteria were postulated to by Laughlin). Originally I was going to say more about superfluid helium before I realized how much I was going to have to write and decided spontaneous symmetry breaking was much clearer.
It’s just that the abstract models are computational shortcuts, and where they disagree with less abstract models, the less abstract models will be more accurate.
Sure, but it’s also important to remember that there exist aggregate behaviors that don’t depend on the microphysics in a meaningful way (the high energy modes decouple and integrate out entirely), and as such can only be meaningfully understood in the aggregate. This is a different issue than the Newtonian physics/GR issue (Newtonian mechanics is a limit of GR, not an emergent theory based on GR; the degrees of freedom are the same).
My experience indicates that a vaguely anti-Eliezerish post, like someone questioning his orthodox reductionism, MWI or cryonics, gets an initial knee-jerk downvote, probably (that’s only an untested hypothesis) from those who think that the matter is long settled and should not be brought up again. Eventually a less-partial crowd reads it, and it may be upvoted or downvoted based on merits, rather than on the degree of conformance. Drawing attention to the current total vote is probably likely to cause this moderate crowd to actually vote, one way or another.
When whowhowho posted a list of a couple names of people who don’t like reductionism, I said to myself “if reductionism is right, I want to believe reductionism is right. If reductionism is wrong, I want to believe reductionism is wrong” etc. I then went and googled those names, since those people are smart people, and found a paper published by the first name on the list. The main arguments of the paper were, “solid state physicists don’t believe in reductionism”, “consciousness is too complex to be caused by the interactions between neurons”, and “biology is too complex for DNA to contain a complete instruction set for cells to assemble into a human being”. Since argument screens off authority and the latter two arguments are wrong, I kept my belief.
EHeller apparently has no argument with reductionism, except that it isn’t a “good way to solve problems”, with which I agree entirely: if you try to build an airplane by modeling air molecules it will take too long. But that doesn’t mean that if you try to build an airplane by modeling air molecules, you will get a wrong answer. You will get the right answer. But then why did EHeller state his disagreement?
The paper uses emergent in exactly the way that EY described in the Futility of Emergence, and I was surprised by that, since when I first read The Futility of Emergence I thought that EY was being stupid and that there’s no way people could actually make such a basic mistake. But they do! I had no idea that people who reject reductionism actually use arguments like “consciousness is an emergent phenomenon which cannot be explained by looking at the interaction between neurons”. They don’t come out and say “top-down causality”, which really is a synonym for magic, like EHeller did, but they do say “emergence”.
When I downvoted, it was after I had made sure I understood spontaneous symmetry breaking, and that it was not top-down causality, since that was the argument EHeller presented that I took seriously. I think fewer people believe in reductionism just because of EY than you think.
Let’s start with something rather uncontroversial. We would probably all agree that, reducing a complex system like a brain or a star top-down, we find its increasingly small constituents interacting with other constituents, all the way down to the lowest accessible level. It is also largely uncontroversial that carefully putting together all the constituents exactly the way they were before will give us the original system, or something very close to it. Note the emphasis.
However, this is not what EHeller and myself mean by emergence. This analysis/synthesis process is rather unpredictable and unstable in both directions. The same high-level behavior can be produced by wildly varying constituents, and this variation can happen at multiple levels. For example, ripples in a bowl of liquid can be produced by water, or by something else, or they might be a mirror image of an actual bowl, or they might be a video someone recorded, or a computer simulation, or a piece of fabric in the wind producing a similar effect, etc. You won’t know until you start digging. Looking bottom-up, I’d call it “emergence convergence”.
If you look at the synthesis part, you will find that rather tiny variations in how you put things together result in enormous changes in the high-level behavior. A minor variation in the mass of one quark (which is probably determined by some hard-to-calculate term in some QFT equation) would result in a totally different universe. An evolution of the same initial conditions is likely to produce different result in the one world you care about (but let’s not get sidetracked by MWI). Laplace-style determinism has never worked in practice on any scale, as far as I know. Well, maybe there are some exceptions, I’m not sure. Anyway, my point is that what emerges from putting lots of similar things together in various ways is quite unpredictable, though you often can analyze it in retrospect.
While “emergence” is not a good explanation of anything, it happens often enough, people ought to expect it, so it has some predictive power. 2+2 might be 4, but 2+2+2+...+2 might not even be a number anymore. Like when these twos are U-235 atoms. When you get lots of people together, you might get a mindless mob, or you might get an army, you won’t know the first time you try. So, while emergence is not a good explanation, the idea is useful in a Hegel-like meta way: expect qualitative jumps simply from accumulating quantitative changes. Ignore this inevitability at your own peril.
I have no disagreement that high level behaviors are wildly variable, unpredictable, and all of the other words which mean “difficult to reduce down to lower level behaviors”. Yes, wildly different constituent parts can create the same macroscopic behavior, or changing just a single lower level property in a system can cause the system to be unrecognizably different from before. But my point is that, if the universe is a physics simulator, it only has to keep track of the quarks. When I wake up in the morning, the universe isn’t running a separate “human wake-up” program which tells the quarks how to behave; it’s just running the standard “quark” program that it runs for all quarks. That’s all it ever has to run. That’s all I’m saying, when I say that I believe in reductionism. Reductionism doesn’t say that it’s practical for us to think in those terms, just that the universe thinks in those terms.
Finding a counterexample to this, a time when if our universe is a physics simulator, it must run code other than one process of ‘quark.c’ for each quark, would be a huge blow to reductionism, and I don’t think one has been found yet. Perhaps I am wrong, although I have looked pretty thoroughly at this point, as I continue to google “arguments against reductionism” and find that none of them can actually give an example of such top-down causality.
When I wake up in the morning, the universe isn’t running a separate “human wake-up” program which tells the quarks how to behave; it’s just running the standard “quark” program that it runs for all quarks.
First, I am not comfortable modeling the universe as a computer program, because it’s implicitly dualist, with the program separate from the underlying “hardware”. Or maybe even “trialist”, if you further separate the hardware from the entity deciding what program to run. While this may well be the case (the simulation argument), at this point we have no evidence for it. So please be aware of the limitations of this comparison.
Second, how would you tell the difference between the two cases you describe? What would be an observable effect of running the “”human wake-up” program which tells the quarks how to behave”? If you cannot tell the difference, then all you have left is the Bayesian inference of the balance of probabilities based on Occam’s razor, not any kind of certainty. Ellis and Co. actually argue that agency is an example of “top-down causality” (there would be no nuclei colliding in the LHC if humans did not decide to build it to begin with). I am not impressed with this line of reasoning, precisely because when you start investigating what “decide” means, you end up having to analyze humans in terms of lower-level structures, anyway.
Third, (explicitly, not tacitly) adopting the model of the universe as a computer program naturally leads to the separation of layers: you can run the Chemistry app on the Atomic Physics API, and you don’t care what the implementation of the API is. That API is rather poorly designed and leaky, so we can infer quite a bit about its innards from the way it behaves. Maybe it was done by a summer student or something. There are some much nicer API inside. For example, whoever wrote up “electron.c” left very few loose ends: it has mass, charge and spin, but no size to probe and apparently no constituents. The only non-API hook into the other parts if the system is its weak isospin. Or maybe this was intentional, too. OK, it’s time to stop anthropomorphizing.
Anyway, I tend to agree that it seems quite likely that there is no glitch in the matrix and we probably have only a single implementation instance of the currently-lowest-known-level API (the Standard Model of Particle Physics), but this is not true higher up. The same API is often reused at many different levels and for many different implementations, and insisting that there is only a single true top-down protocol stack implementation is not very productive.
It would be helpful, since you keep bringing up arguments that are in this paper if you provide a link to the paper in question.
I feel like I have been misunderstood, and we are discussing this in other branches of the thread, but I can’t help but feel like Laughlin has also been misunderstood but I can’t judge if you don’t provide a link.
In another sense, non-math geniuses advocating for reductionism are no better than the anti-vaccine lobby.
What sense is that?
The sense in which they did not come about their beliefs based on starting with sane priors which did not presuppose reductionism, and then update on evidence until they independently discovered reductionism.
I disagree with the grandparent, however: I believe that (most) non-math-geniuses advocating for reductionism are more akin to Einstein believing in General Relativity before any novel predictions had been verified: recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs.
The “absurdity” of non-reductionism seems to have evaded Robert Laughlin, Jaron Lanier and a bunch of other smart people.
I did not say that non-reductionism is absurd. I said that “recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs”.
Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.
Can you explain to me how it might work?
Edit: I googled “Robert Laughlin Reductionism” and actually found a longish paper he wrote about reductionism and his beliefs. I have some criticisms:
Yudkowsky has a great refutation of using the description “emergent”, at The Futility of Emergence, to describe phenomenon. From there:
Further down in the paper, we have this:
Every time he makes the specific claim that reductionism makes worse predictions than a belief in “emergent phenomenon” in which “organizational structure” is an additional property that all of reality must have, in addition to “mass” and “velocity”, he cites himself for this. He also does not provide any evidence for non-reductionism over reductionism; that is, he cannot name a single prediction where non-reductionism was right, and reductionism was wrong.
He goes on to say that reductionism is popular because you can always examine a system by looking at its internal mechanisms, but you can't always examine a system by looking at it from a “higher” perspective. A good example, he says, is genetic code: to assume that DNA is actually a complete algorithmic description of how to build a human body is an illogical conclusion, according to him.
He would rather suppose that the universe contains rules like “When a wavefunction contains these particular factorizations which happen not to cancel out, in a certain organizational structure, use a different mechanism to decide possible outcomes instead of the normal mechanism” than suppose that the laws of physics are consistent throughout and contain no such special cases. From the standpoint of simplicity, reductionism is simpler than non-reductionism, since non-reductionism is the same thing as reductionism except with the addition of special cases.
He specifically objects that reductionism isn’t always the “most complete” description of a given phenomenon; that elements of a given phenomenon “cannot be explained” by looking at the underlying mechanism of that phenomenon.
I think this is nonsense. Even supposing that the laws of physics contain special cases for things like creating a human body out of DNA, or for things like consciousness, then in order for such special case exceptions to actually be implemented by the universe, they must be described in terms of the bottom-most level. Even if a DNA strand is not enough information to create a human being, and the actual program which creates the human being is hard coded into the universe, the object that the program must manipulate is still the most basic element of reality, the wavefunction, and therefore the program must specify how certain amplitude configurations must evolve, and therefore the program must describe reality on the level of quarks.
This is still reductionism, it is just reductionism with the assumed belief that the laws of physics were designed such that certain low-level effects would take place if certain high-level patterns came about in the wavefunction.
This is the only coherent way I could possibly imagine consciousness being an “emergent phenomenon”, or the creation of a human body from the blueprints of DNA being impossible without additional information. Do you suppose Laughlin was saying something else?
At first, when I read EY's “The Futility of Emergence” article, I didn't understand. It seemed to me that there was no way people actually treat “emergence” as a scientific explanation for how a phenomenon occurs, such that you could not predict the phenomenon would occur even if you knew how every piece of the system worked individually. I didn't think it possible that anyone would actually believe that knowing how all of the gears in a clock work doesn't let you predict what the clock will say based on the positions of the gears (for sufficiently “complex” clocks). And so I thought that EY was jumping the gun in this fight.
But perhaps he read this very paper, because Laughlin uses the word “emergent phenomenon” to describe behavior he doesn’t understand, as if that’s an explanation for the phenomenon. Even though you can’t use this piece of information to make any predictions as to how reality is. Even though it doesn’t constrain your anticipation into fewer possibilities, which is what real knowledge does. He uses this word as a substitute for “magic”; he does not know how an extremely complex phenomenon works, and so he supposes that the actual mechanism for the phenomenon is not enough to fully explain the phenomenon, that additional aspects of the phenomenon are simply uncaused, or that there is a special-case exclusion in the universe’s laws for the phenomenon.
He does not explore the logical implications of this belief: that some aspects of a phenomenon have no causal mechanism, and therefore could not possibly have been predicted. He makes the claim that a hypothetical Theory of Everything would not be able to explain some of the things we find interesting about some phenomena. Does he believe that if we programmed a physics simulator with the Correct Theory of Everything, and fed it the boundary conditions of the universe, then that simulated universe would not look exactly like our universe? That the first time DNA occurred on earth, in that simulated universe, it would not be able to create life (unlike in our universe) because we didn't include in the laws of physics a special clause saying that when you have DNA, interpret it and then tell the quarks to move differently from how they would have?
I believe that DNA contains real instructions for how to construct an entire human from start to finish. I don’t think the laws of physics contain such a clause.
I read the whole paper by Laughlin and I was unimpressed. If this is the best argument against reductionism, then reductionism is undoubtedly the winner. You called Laughlin a “smart person”, but he isn’t smart enough to realize that calling the creation of humans from DNA an “emergent phenomenon” is literally equivalent to calling it a “magic phenomenon”, in that it doesn’t limit your anticipation of what could happen. If you can equally explain every possible outcome, you have no knowledge...
It's a bit of an aside to your main point, but there are good arguments to support the assertion that DNA is only a partial recipe for an organism, such as a human. The remaining information is present in the environment of the mother's womb in other forms—for example, where there's an ambiguity in the DNA with regard to the folding of a certain protein, other proteins present in the womb may correct any incorrectly folded samples.
To look at your main point; if I were to present an argument against reductionism, I would point to the personal computer. This is a device constructed in order to run software; that is, to follow a list of instructions that manipulate binary data. Once you have a list of all the instructions that the computer can follow, and what these instructions do, a thorough electrical analysis of the computer’s circuitry isn’t going to provide much new information; and it will be a lot more complicated, and harder to understand. There’s a conceptual point, there, at the level of individual software instructions, where further reductionism doesn’t help to understand the phenomenon, and does make the analysis more complicated, and harder to work with.
A thorough electrical analysis is, of course, useful if one wishes to confirm that the stated behaviour of the basic software commands is both correctly stated, and free of unexpected side-effects. However, an attempt to describe (say) the rendering of a JPEG image in terms of which transistors are activated at which point is likely a futile exercise.
As an aside to an aside, I wonder how much information about the DNA reading frame could in principle be extracted from the DNA of a female organism, given the knowledge (or the assumption) that mature females can gestate a zygote? Almost all possible reading frames would be discardable on the grounds that the resulting organism would not be able to gestate a zygote, of course, but I don’t have any intuitive sense of how big the remaining search space would be.
And as a nod towards staying on topic:
Well, it will, and it won’t.
If what I mostly care about is the computer’s behavior at the level of instructions, then sure, understanding the instructions gets me most of the information that I care about. Agreed.
OTOH, if what I mostly care about is the computer's behavior at the level of electrical flows through circuits (for example, if I'm trying to figure out how to hack the computer without an input device by means of electrical induction, or confirm that it won't catch fire in ordinary use), then a thorough electrical analysis of the computer's circuitry provides me with tons of indispensable new information.
What counts as “information” in a colloquial sense depends a lot on my goals. It might be useful to taboo the word in this discussion.
My intuition says “very, very big”. Consider: depending on womb conditions, the percentage of information expressed in the baby which is encoded in the DNA might change. As an extreme example, consider a female creature whose womb completely ignores the DNA of the zygote, creating instead a perfect clone of the mother. Such an example makes it clear that the search space is at least as large as the number of possible female creatures that are able to produce a perfect clone of themselves.
I accept your point. Such an analysis does provide a more complete view of the computer, which is useful in some circumstances.
Sure, I agree that one permissible solution is a decoder which produces an organism capable of cloning itself. And while I'm willing to discard, as violating the spirit of the thought experiment, decoder designs which discard the human DNA in its entirety and create a predefined organism (in much the same sense that I would discard any text-translation algorithm that discarded the input text and printed out the Declaration of Independence as a legitimate translator of the input text), there's a large space of possibilities here.
Would you be willing to consider, i.e. not discard, a decoder that used the human DNA as merely a list of indexes, downloading the required genes from some sort of internal lookup table?
By changing the lookup table, one can dramatically change the resulting organism; and having a different result for every viable human DNA is merely a result of having a large enough lookup table. It would be, to extend your metaphor, like a text-translation algorithm that returned the Declaration of Independence if given Alice in Wonderland as input, and returned Alice in Wonderland if given Hamlet.
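To make the thought experiment concrete, here is a minimal Python sketch of such an index-based decoder, under the assumption that the input string is treated purely as a list of indexes into a table; the table contents and the hashing trick are invented for illustration, not a claim about real genetics.

```python
# Toy sketch of the hypothetical "DNA as a list of indexes" decoder discussed above.
# Everything here (the table, the hash-based indexing) is invented for illustration.

import hashlib

# Hypothetical internal lookup table mapping indexes to predefined "gene" payloads.
LOOKUP_TABLE = {
    0: "gene-payload-A",
    1: "gene-payload-B",
    2: "gene-payload-C",
    3: "gene-payload-D",
}

def decode(dna: str) -> list[str]:
    """Treat each fixed-length chunk of the input as nothing but an index."""
    chunk_size = 4
    chunks = [dna[i:i + chunk_size] for i in range(0, len(dna), chunk_size)]
    indexes = [
        int(hashlib.sha256(chunk.encode()).hexdigest(), 16) % len(LOOKUP_TABLE)
        for chunk in chunks
    ]
    return [LOOKUP_TABLE[i] for i in indexes]

# Different inputs give different outputs, but all of the actual content
# lives in the table, not in the input string.
print(decode("ACGTACGTACGT"))
print(decode("TTTTACGTCCCC"))
```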
(considers)
I would like to say “no”, but can’t think of any coherent reason to discard such a design.
Yeah, OK; point made.
Well, yes—but that arises from the fact that such devices are man-made, and (out of respect to our brains’ limitations) designed to isolate the layers of explanation from one another—to obviate the need for a fully reductionistic account. The argument will not apply to things not man-made.
Not true. There is a reason no one uses quarks to describe chemistry. It's futile to describe what's happening in superfluid helium in terms of individual particle movement. Far better to use a two-fluid model, and vortices.
Let me amend that: the argument will not necessarily apply to things not man-made. There is a categorical difference in this respect between man-made things and the rest, and my intent was to say: “if you’re going to put up an argument against reductionism, don’t use examples of man-made things”.
Whereas we have good reasons to bar “leaky abstractions” from our designs, Nature labors under no such constraint. If it turns out that some particular process that happens in superfluid helium can be understood only by referring to the quark level, we are not allowed to frown at Nature and say “oh, poor design; go home, you're drunk”.
For instance, it turns out we can almost describe the universe in the Newtonian model with its relatively simple equations, a nice abstraction if it were non-leaky, but anomalies like the precession of Mercury turn up that require us to use General Relativity instead, and take it into account when building our GPS systems.
The word “futile” in this context strikes me as wishful thinking, projecting onto reality our parochial notion of how complicated a reductionistic account of the universe “should” be. Past experience tells us that small anomalies sometimes require the overthrow of entire swathes of science, in the name of reductionism: there keep turning up cases where science considers it necessary, not futile, to work things out in terms of the lower levels of description.
I think you are making a bad generalization when you turn to Newtonian mechanics vs. general relativity. There are important ways in which mesons and hadrons are emergent from quarks that have no correspondence to the relationship between Newtonian mechanics and GR.
As length scales increase, quarks go from being loosely bound fundamental degrees of freedom to not-even-good-degrees-of-freedom. At ‘normal’ length scales, free quarks aren't even allowed. The modern study of materials is also full of examples of emergence (it underlies much work on renormalization groups), although it's farther from my expertise, so the only example to spring to mind was liquid helium.
The entire science of psychology is based on the idea that it is useful to apply high-level rules to the neural functioning of the human brain. If I decide to eat a cookie, then I explain it in high-level terms; I was hungry, the cookie smelt delicious. An analysis in terms of the effect of airborne particles originating from the cookie on my nasal passages, and subsequent alterations in the pattern of neural activations in my brain, can give a far more complicated answer to the question of why I ate the cookie; but, again, I don’t see how such a more complicated analysis would be better. If I want to understand my motivations more fully, I can do so in terms of mental biases, subconscious desires, and so forth; rather than a neuron-by-neuron analysis of my own brain.
And while it is technically true that I, as a human, am man-made (specifically, that I was made by my parents), a similar argument could be raised for any animal.
Such situations are rare, but not entirely unknown.
I disagree with your entire premise. I think we should pin down this concept of “levels of perspective” with some good jargon at some point, but regardless...
You can look at a computer from the level of perspective of “there are windows on the screen and I can move the mouse around. I can manipulate files on the hard drive with the mouse and the keyboard, and those changes will be reflected inside information boxes in the windows.” This is the perspective most people see a computer from, but it is not a complete description of a computer (i.e. if someone unfamiliar with the concept of computers heard this description, they could not build a computer from base materials.)
You might also see the perspective, “There are many tiny dots of light on a flat surface, lit up in various patterns. Those patterns are caused by electricity moving in certain ways through silicon circuitry arranged in particular ways.” This is, I think, one level lower, but an unfamiliar person could not build a computer from scratch from this description.
Another level down, the description might be: “There is a CPU, which is composed of hundreds of thousands of transistors, arranged into logic gates such that when electricity is sent through them you can perform meaningful calculations. These calculations are written in files using a specific instruction set (“assembly language”). The files are stored on a disk in binary, with the disk containing many tiny magnetic regions, each magnetized in one of two directions, representing 1 and 0 respectively. When the CPU needs to temporarily store a value useful in its calculations, it does so in the RAM, which is like the disk except much faster and smaller. Some of the calculations are used to make certain square-shaped lights on a large flat surface blink in certain ways, which provides arbitrary information to the user.” We are getting to the point where an unfamiliar human might be able to recreate a computer from scratch, and therefore can be said to actually “understand” the system.
But there are still lower levels. Describing the actual logic gate organization in the CPU, the system used by RAM to store variables, how the read/write head accesses a specific bit on the hard drive as the platter spins… All of these things must be known and understood in order to rebuild a computer from scratch.
Humans designed the computer at the level of “logic gates”, “bits on a hard drive”, “registers”, etc., and so it is not necessary to go deeper than this to understand the entire system (just as you don't have to go deeper than “gears and cogs” to understand how a clock works, or how you don't have to go deeper than “classical physics (billiard balls bouncing into each other)” to understand how a brain works).
But I hope that it’s clear that the mechanisms at the lower levels of a system completely contain within them the behavior of the higher levels of the system. There are no new behaviors which you can only learn about by studying the system from a higher level of perspective; those complicated upper-level behaviors are entirely formed by the simple lower-level mechanisms, all the way down to the wave function describing the entire universe.
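To illustrate that claim with something small enough to inspect, here is a toy Python sketch, assuming NAND gates as the “lowest level”; it is only an illustration of lower-level rules fully containing higher-level behavior, not a model of any real CPU.

```python
# A minimal sketch of the "lower levels contain the higher levels" claim,
# using logic gates as the lowest level.

def nand(a: int, b: int) -> int:
    """The only primitive rule: everything else is built from it."""
    return 0 if (a and b) else 1

# Higher-level gates defined purely in terms of the primitive.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

# A half adder: the higher-level behavior "addition" is fully determined by
# the gate-level rules; no extra "addition law" is added anywhere.
def half_adder(a, b):
    return xor(a, b), and_(a, b)   # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert s + 2 * c == a + b  # the arithmetic fact falls out of NAND alone
print("addition emerges from the gate rules alone")
```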
That is what reductionism means. If you know the state of the entire wavefunction describing the universe, you know everything there is to know about the universe. You could use it to predict that, in some Everett branches, the assassination of Franz Ferdinand on the third planet from the star Sol in the Milky Way galaxy would cause a large war on that planet. You could use it to predict the exact moment at which any particular “slice” of the wavefunction (representing a particular possible universe) will enter its maximum entropy state. You could use it to predict any possible behavior of anything and you will never be surprised. That is what it means to say that all of reality reduces down to the base-level physics. That is what it means to posit reductionism: that from an information-theoretic standpoint, you can make entirely accurate predictions about a system with only knowledge about its most basic level of perspective.
If you can demonstrate to me that there is some organizational structure of matter which causes that matter to behave differently from what would be predicted by just looking at the matter in question without considering its organization (which would require, by the way, all of reality to keep track not only of mass and of velocity but also of its organizational structure relative to nearby reality), then I will accept such a demonstration as being a complete and utter refutation of reductionism. But there is no such behavior.
You are right; my example was a bad one, and it does not support the point that I thought it supported. The mere fact that something takes unreasonably long to calculate does not mean that it is not an informative endeavour. (I may have been working from a bad definition of reductionism).
Um. I suspect that this may have been poorly phrased. If I have a lump of carbon, quite a bit of water, and a number of other elements, and I just throw them together in a pile, they’re unlikely to do much—there may be a bit of fizzing, some parts might dissolve in the water, but that’s about it. Yet if I reorganise the same matter into a human, I have an organisation of matter that is able to enter into a debate about reductionism; which I don’t think can be predicted by looking at the individual chemical elements alone.
But that behaviour might still be predictable from looking at the matter, organised in that way, at its most basic level of perspective (given sufficient computing resources). Hence, I suspect that it is not a counter-example.
That's a fusion of reductionism and determinism. Reductionism isn't necessarily false in an indeterministic universe. What is more pertinent is being able to predict higher-level properties and laws from lower-level properties and laws (synchronously, in the latter case).
No it isn’t? I did not mean you would be able to make predictions which came true 100% of the time. I meant that your subjective anticipation of possible outcomes would be equal to the probability of those outcomes, maximizing both precision and accuracy.
Yes it is.
“A property of a system is said to be emergent if it is in some sense more than the “sum” of the properties of the system’s parts. An emergent property is said to be dependent on some more basic properties (and their relationships and configuration), so that it can have no separate existence. However, a degree of independence is also asserted of emergent properties, so that they are not identical to, or reducible to, or predictable from, or deducible from their bases. The different ways in which the independence requirement can be satisfied lead to various sub-varieties of emergence.”—WP
Still determinism, not reductionism. In a universe where:
* 1a. there are lower-level properties...
* 1b. ...operating according to a set of deterministic laws;
* 2a. there are also higher-level properties...
* 2b. ...irreducible to and unpredictable from the lower-level properties and laws...
* 2c. ...which follow their own deterministic laws;
You would be able to predict the future with complete accuracy, given both sets of laws and two sets of starting conditions. Yet the universe being described is explicitly non-reductionistic.
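For concreteness, here is a toy Python sketch of the kind of universe being described, with an invented “mood” variable standing in for the irreducible higher-level property; it is my own construction, offered only to illustrate the distinction between determinism and reductionism.

```python
# A toy universe that is deterministic at both levels, yet whose higher-level
# law is stipulated as its own rule rather than derived from the lower-level one.

# Lower level: a list of cells, each updated by a fixed local rule (1b).
def low_level_step(cells):
    return [(c + 1) % 10 for c in cells]

# Higher level: a separate "mood" variable with its own deterministic law (2c),
# stipulated here as irreducible: it is NOT computed from the cells (2b).
def high_level_step(mood):
    return "calm" if mood == "agitated" else "agitated"

def step(state):
    cells, mood = state
    return low_level_step(cells), high_level_step(mood)

# Given both laws and both starting conditions, the whole future is predictable...
state = ([0, 3, 7], "calm")
for _ in range(3):
    state = step(state)
print(state)
# ...even though nothing about the cells tells you what "mood" will do, which is
# the sense in which this toy universe is deterministic but not reductionistic.
```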
All this means is that, in addition to the laws which govern low-level interactions, there are different laws which govern high-level interactions. But they are still laws of physics; they just sound like “when these certain particles are arranged in this particular manner, make them behave like this, instead of how the low-level properties say they should behave”. Such laws are still fundamental laws, on the lowest level of the universe. They are still a part of the code for reality.
But you are right:
Which is what I said:
Ergo, a reductionistic universe is also deterministic from a probabilistic standpoint, i.e. the lowest level properties and laws can tell you exactly what to anticipate, and with how much subjective probability.
Microphysical laws map microphysical states to other microphysical states. Top-down causation maps macrophysical states to microphysical states.
In the sense that they are irreducible, yes. In the sense that they are concerned only with microphyics, no.
“Deterministic” typically means that an unbounded agent will achieve probabilities of 1.0.
Can you name any examples of such a phenomenon?
Oh, well in that case quantum physics throws determinism out the window for sure. I still think there's something to be said for correctly assigning subjective probabilities to your anticipations, such that of all the things you expect to happen with 50% probability, half of them actually happen, i.e. you are correctly calibrated.
An unbounded agent in our universe would be able to achieve such absolutely correct calibration; that’s all I meant to imply.
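As a concrete illustration of what checking calibration looks like (with made-up prediction data), a minimal Python sketch might be:

```python
# Group predictions by stated probability and check how often they came true.
# The prediction data below is invented purely for illustration.

from collections import defaultdict

# (stated probability, did it actually happen?)
predictions = [
    (0.5, True), (0.5, False), (0.5, True), (0.5, False),
    (0.9, True), (0.9, True),  (0.9, True), (0.9, False),
]

buckets = defaultdict(list)
for p, happened in predictions:
    buckets[p].append(happened)

for p, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> happened {observed:.0%} of the time (n={len(outcomes)})")

# Perfect calibration means the observed frequency matches the stated probability
# in every bucket; an unbounded agent could in principle be calibrated exactly.
```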
I’m a bit confused. What exactly defines a “higher-level” property, if not that it can be reduced to lower-level properties?
e.g. being macroscopic, featuring only in the special sciences
I did not say that non-reductionism is absurd. I said that “recognizing the absurdity of all other proposed hypotheses is another way of coming about the correct beliefs”.
Nonetheless, I do think that non-reductionism is absurd. I cannot imagine a universe which is not reductionistic.
One formulation of reductionism is that natural laws can be ordered in a hierarchy, with the higher-level laws being predictable from, or reducible to, the lower ones. So emergentism, in the cognate sense, not working would be that stack of laws failing to collapse down to the lowest level.
There are two claims there: one contentious, one not. That there are multiply-realisable, substrate-independent higher-level laws is not contentious. For instance, wave equations have the same form for water waves, sound waves and so on. The contentious claim is that this is ipso facto top-down causation. Substrate-independent laws are still reducible to substrates, because they are predictable from the behaviour of their substrates.
I don't see how that refutes the above at all. For one thing, Laughlin and Ellis do have detailed examples of emergent laws (in their rather weak sense of “emergent”). For another, they are not calling on emergence itself as doing any explaining. “Emergence isn't explanatory” doesn't refute “emergence is true”. For a third, I don't see any absurdity here. I see a one-word-must-have-one-meaning assumption that is clouding the issue. But where a problem is so fuzzily defined that it is hard even to identify the “sides”, then one can't say that one side is “absurd”.
Neither are supposed to make predictions. Each can be considered a methodology for finding laws, and it is the laws that do the predicting. Each can also be seen as a meta-level summary of the laws so far found.
EY can’t do that for MWI either. Maybe it isn’t all about prediction.
That’s robustly true. Genetic code has to be interpreted by a cellular environment. There are no self-decoding codes.
Reductionism is an approach that can succeed or fail. It isn't true a priori. If reductionism failed, would you say that we should not even contemplate non-reductionism? Isn't that a bit like Einstein's stubborn opposition to QM?
I suppose you mean that the reductionistic explanation isn't always the most complete explanation... well, everything exists in a context.
There is no a priori guarantee that such an explanation will be complete.
That isn’t the emergentist claim at all.
Why? Because you described them as “laws of physics”? An emergentist wouldn't. Your objections seem to assume that some kind of reductionism+determinism combination is true in the first place. That's just gainsaying the emergentist claim.
If there is top-down causation, then its laws must be couched in terms of lower-level AND higher-level properties. And are therefore not reductionistic. You seem to be tacitly assuming that there are no higher-level properties.
Cross-level laws aren’t “laws of physics”. Emergentists may need to assume that microphysical laws have “elbow room”, in order to avoid overdetermination, but that isn’t obviously wrong or absurd.
As it happens, no-one does. That objection was made in the most upvoted response to his article.
Can you predict qualia from brain-states?
Mechanisms have to break down into their components because they are built up from components. And emergentists would insist that that does not generalise.
Or as a hint about how to go about understanding them.
That’s not what E-ism says at all.
That’s an outcome you would get with common or garden indeterminism. Again: reductionism is NOT determinism.
What’s supposed to be absurd there? Top-down causation, or top-down causation that only applies to DNA?
The arguments for emergence tend not to be good. Neither are the arguments against. A dispute about a poorly-defined distinction with poor arguments on both sides isn't a dispute where one side is “absurd”.
I don't think you understand Laughlin's point at all. Compare a small volume of superfluid liquid helium, and a small volume of water with some bacteria in it. Both systems have the exact same Hamiltonian, and both systems have roughly the same amount of the same constituents (protons, neutrons, electrons), but the systems behave vastly differently. We can't understand their differences by going to a lower level of description.
Modern materials science/solid-state physics is the study of the tremendous range of different, complex behaviors that can arise from the same Hamiltonians. Things like spontaneous symmetry breaking are rigorously defined, well-observed phenomena that depend on aggregate, not individual, behavior.
Why didn’t he mention superfluidity, or solid state physics, then? The two examples he listed were consciousness not being explainable from a reductionist standpoint, and DNA not containing enough information to come anywhere near being a complete instruction set for building a human (wrong).
Also, I'm pretty sure that the superfluid tendencies of liquid helium-4 come from the fact that it is composed of six particles (two protons, two neutrons, two electrons), each with half-integer spin. Because you can't make six halves add up to anything other than a whole number, quantum effects mean that all of the atoms can occupy exactly the same state and are utterly indistinguishable, even positionally, and that's what causes the strange effects. I do not know exactly how this effect reduces down to individual behavior, since I don't know exactly what “individual behavior” could mean when we are talking about particles which cannot be positionally distinguished, but to say that superfluid helium-4 and water have the exact same Hamiltonian is not enough to say that they should have the same properties.
Spontaneous symmetry breaking can be reduced down to quantum mechanics. You might solve a field equation and find that there are two different answers as to the mass of two quarks. In one answer, quark A is heavier than quark B, but in the other answer, quark B is heavier than quark A, and you might call this symmetry breaking, but just because when you take the measurement you get one of the answers and not the other, does not mean that the symmetry was broken. The model correctly tells you to anticipate either answer with 1:1 odds, and you’ll find that your measurements agree with this: 50% of the time you’ll get the first measurement, and 50% of the time you’ll get the second measurement. In the MW interpretation, symmetry is not broken. The measurement doesn’t show what really happened, it just shows which branch of the wavefunction you ended up in. Across the entire wavefunction, symmetry is preserved.
Besides, it’s not like spontaneous symmetry breaking is a behavior which arises out of the organization of the particles. It occurs at the individual level.
I don't know why Laughlin wrote what he did; you didn't link to the paper. However, he comes from a world where solid state physics is obvious, and “everyone knows” various things (emergent properties of superfluid helium, for instance). Remember, his point of reference as a solid state physicist is quite different from a non-specialist's, so there is a huge inferential distance. Also remember that in physics “emergent” is a technical, defined concept.
Your explanation of superfluid helium isn't coherent, and I had a book-length post typed up when a simpler argument presented itself. Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say “in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons,” you do not have enough information to tell me how the system behaves, but from a purely reductionist standpoint, you should. If this doesn't sway you, let's agree to disagree, because I think spontaneous symmetry breaking should be enough to make my point, and it's easier to explain.
I don't think you understand what spontaneous symmetry breaking is; I have very little idea what you are talking about. Let's ignore quantum mechanics for the time being, because we can describe what's happening on an entirely classical level. Spontaneous symmetry breaking arises when the Hamiltonian has a symmetry that the aggregate ground state does not. That's the whole definition, and BY DEFINITION it depends on details of the aggregate ground state and the organization of the particles.
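For readers without the solid-state background, a minimal numerical sketch of that definition, using a classical one-dimensional double-well (“Mexican hat”-style) potential of my own choosing, might look like this:

```python
# The potential below is symmetric, but each individual ground state is not:
# that mismatch is the definition of spontaneous symmetry breaking given above.

import numpy as np

a = 1.0
x = np.linspace(-2.0, 2.0, 4001)
V = (x**2 - a**2) ** 2            # V(x) = (x^2 - a^2)^2

# The energy function is symmetric under x -> -x ...
assert np.allclose(V, V[::-1])

# ... but its minima sit at x = +a and x = -a. Any single ground state picks one
# of them and so breaks the symmetry, even though the set of ground states as a
# whole respects it.
tolerance = 1e-9
minima = x[V < V.min() + tolerance]
print(minima)                      # roughly [-1.  1.]
```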
And finally, you can rigorously prove via renormalization group methods that in many systems the high-energy degrees of freedom can be averaged out entirely and have no effect on the form of the low-energy theory. In these systems, to describe low-energy structures in such theories (most theories), the details of the microphysics literally do not matter. Computational physicists use this to their advantage all the time: if they want to look at meso- or macro-scale physics, they assume very simple micromodels that are easy to simulate, instead of realistic ones, and are fully confident they get the same meso and macro results.
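A toy Python sketch of that “microphysics averages out” idea, using simple block-averaging rather than a real renormalization-group calculation (the signal and the two noise models are invented for illustration):

```python
# Two very different micro-rules for local noise produce the same coarse-grained
# (block-averaged) picture.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# The same slowly varying "macro" signal...
macro = np.sin(np.linspace(0, 2 * np.pi, n))

# ...dressed with two different kinds of micro-scale noise (different "microphysics").
micro_a = macro + rng.normal(0.0, 1.0, n)                   # Gaussian fluctuations
micro_b = macro + rng.uniform(-np.sqrt(3), np.sqrt(3), n)   # uniform, same variance

def coarse_grain(signal, block=10_000):
    """Average over blocks, discarding the short-wavelength detail."""
    return signal[: len(signal) // block * block].reshape(-1, block).mean(axis=1)

# After averaging out the fast degrees of freedom, the two coarse-grained
# descriptions agree to within the expected statistical error.
print(np.max(np.abs(coarse_grain(micro_a) - coarse_grain(micro_b))))
```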
I'll admit that I am not a PhD particle physicist, but what you describe as reductionism is not what I believe to be true. If we ignore quantum physics and describe what's happening on an entirely classical level, then we can reduce the behavior of a physical system down to its most fundamental particles and the laws which govern the interactions between those basic particles. You can predict how a system will behave by knowing the position and the velocity of every particle in the system; you do not have to keep track of an organizational structure as a separate property, because the organization of a physical system can be deduced from the other two properties.
If reductionism, to you, means that by simply knowing the number of electrons, protons, and neutrons which exist in the universe, you should be able to know how the entire universe behaves, then I agree: reductionism is false.
With that in mind, can you give an example of top-down causality actually occurring in the universe? A situation where the behavior of low-level particles interacting cannot predict the behavior of systems entirely composed of those low-level particles, but instead where the high-level organization causes the interaction between the low-level particles to be different?
That’s what I think reductionism is: you cannot have higher-level laws contradict lower-level laws; that when you run the experiment to see which set of laws wins out, the lower-level laws will be correct every single time. Is this something you disagree with?
I probably don't. I was going based on an AP Physics course in high school. My understanding is basically this: if you dropped a ball perfectly onto the top of a Mexican hat, symmetry would demand that all of the possible paths the ball could take are equally valid. But in the end, the ball only takes one path, and which path it took could not have been predicted from the base-level laws. A quick look at Wikipedia confirms that this idea at least has something to do with symmetry breaking, since one of the subsections for “Spontaneous Symmetry Breaking” is called “A pedagogical example: the Mexican hat potential”, and so I cannot be entirely off.
In classical physics, the ball actually takes one path, and this path cannot be predicted in advance. But in QM, the ball takes all of the paths, and different you’s (different slices of the wavefunction which evolved from the specific neuron pattern you call you), combined, see every possible path the ball could have taken, and so across the wavefunction symmetry isn’t broken.
Since you’re a particle physicist and you disagree with this outlook, I’m sure there’s something wrong with it, though.
Is this similar to saying that when you are modeling how an airplane flies, you don't need to model each particular nitrogen atom, oxygen atom, carbon atom, etc., in the air, but can instead use a model which just talks about “air pressure”, and your model will still be accurate? I agree with you; modeling every single particle when you're trying to decide how to fly your airplane is unnecessary, and you can get the job done with a coarser, less complete model. But that does not mean that a model which did model every single atom in the air would be incorrect; the difference just does not have a large enough effect on the airplane to be noticeable. Indeed, I can see why computational physicists would use higher-level models to their advantage, when such high-level models still get the right answer.
But reductionism simply says that there is no situation where a high level model could get a more accurate answer than a low level model. The low level model is what is actually happening. Newtonian mechanics is good enough to shoot a piece of artillery at a bunker a mile away, but if you wanted to know with 100% accuracy where the shell was going to land, you would have to go further down than this. The more your model breaks macroscopic behavior down into the interactions between its base components, the closer your model resembles the way reality actually works.
Do you disagree?
So I think perhaps we are talking past each other. In particular, my definition of reductionism is that we can understand and model complex behavior by breaking a problem into its constituent components and studying them in isolation, i.e. if you understand the micro-Hamiltonian and the fundamental particles well, you understand everything. The idea of ‘emergence’ as physicists understand it (and as Laughlin was using it) is that there are aggregate behaviors that cannot be understood from looking at the individual constituents in isolation.
A weaker version of reductionism would say that to make absolutely accurate predictions to some arbitrary scale we MUST know the microphysics. Renormalization arguments ruin this version of reductionism.
In a sense, this seems to be espousing this form of reductionism, which I strongly disagree with. There exist physical theories where knowing the microphysics is irrelevant to arbitrarily accurate predictions. Perhaps it would be best to agree on definitions before we make points irrelevant to each other.
Can you give me an example of one of these behaviors? Perhaps my google-fu is weak (I have tried terms like “examples of top down causality”, “against reductionism”, “nonreductionist explanation of”), and indeed I have a hard time finding anything relevant at all, but I can't find a single clear-cut example of behavior which cannot be understood from looking at the individual constituents in isolation.
The aforementioned spontaneous symmetry breaking shows up in a wide variety of different systems. But phase changes in general are probably good examples.
As a general pattern, I find that a lot of my physics-related posts receive downvotes (both of my posts in this very thread did), then I request an explanation for why, no one responds, and then I receive upvotes. What I really want is just for the people giving the downvotes to give me some feedback.
Physics was my PhD subject, and I believe that what I offer to the community is an above-average knowledge of the subject. If you believe my explanation is poorly thought out, incoherent or just hard to parse, please downvote, but let me know what it is that's bugging you. I want to be communicating effectively, and without feedback from the people who think my above post is not helpful, I'm likely to interpret downvotes in a noisy, haphazard way.
From a “purely reductionist stand-point” you would still need to know the initial conditions to predict how the system evolves. Yet you act as if this is a knockdown argument against reductionism.
I was just trying to make my point clearer; it's suggestive, not a knockout. I think the knockout argument against strict reductionism is the renormalization argument.
Also, my training is in particle physics, so I have no problem with reductionism in general; it's simply that, as an approach, it's not a great way to understand many problems, and the post I responded to didn't seem to understand that solid state physicists use “emergent” more as a term of art than as a ‘magical term.’
Your argument was not even suggestive, it was just wrong, because it ignores that a reductionist account would look at the initial conditions.
I don’t think that anyone is arguing that modeling physics at a high level of abstraction is not useful. It’s just that the abstract models are computational shortcuts, and where they disagree with less abstract models, the less abstract models will be more accurate.
The point/thrust of JohnWittle’s that I’m arguing against is that the idea of emergent phenomena is inherently silly/stupid, and a ‘magical word’ to gloss over fuzzy thinking. I chose two very different systems in an attempt to show how incredibly sensitive to initial conditions physics can be, which makes the reductionist account (in many instances) the wrong approach. I apologize if this was not clear (and if you were a downvoter, I sincerely appreciate the feedback). Is my point more clear now? (I have resisted the urge to rephrase my original to try to add clarity)
I also purposely chose two systems I believe have emergent behavior (superfluid helium certainly does; biological entities/bacteria were postulated to by Laughlin). Originally I was going to say more about superfluid helium, before I realized how much I was going to have to write and decided spontaneous symmetry breaking was much clearer.
Sure, but it's also important to remember that there exist aggregate behaviors that don't depend on the microphysics in a meaningful way (the high-energy modes decouple and integrate out entirely), and as such can only be meaningfully understood in the aggregate. This is a different issue from the Newtonian physics/GR issue (Newtonian mechanics is a limit of GR, not an emergent theory based on GR; the degrees of freedom are the same).
My experience indicates that a vaguely anti-Eliezerish post, like someone questioning his orthodox reductionism, MWI or cryonics, gets an initial knee-jerk downvote, probably (that's only an untested hypothesis) from those who think that the matter is long settled and should not be brought up again. Eventually a less-partial crowd reads it, and it may be upvoted or downvoted based on merits, rather than on the degree of conformance. Drawing attention to the current total vote is likely to cause this moderate crowd to actually vote, one way or another.
When whowhowho posted a list of a couple of names of people who don't like reductionism, I said to myself “if reductionism is right, I want to believe reductionism is right. If reductionism is wrong, I want to believe reductionism is wrong” etc. I then went and googled those names, since those people are smart people, and found a paper published by the first name on the list. The main arguments of the paper were, “solid state physicists don't believe in reductionism”, “consciousness is too complex to be caused by the interactions between neurons”, and “biology is too complex for DNA to contain a complete instruction set for cells to assemble into a human being”. Since argument screens off authority and the latter two arguments are wrong, I kept my belief.
EHeller apparently has no argument with reductionism, except that it isn't a “good way to solve problems”, with which I agree entirely: if you try to build an airplane by modeling air molecules it will take too long. But that doesn't mean that if you try to build an airplane by modeling air molecules, you will get a wrong answer. You will get the right answer. But then why did EHeller state his disagreement?
The paper uses emergent in exactly the way that EY described in the Futility of Emergence, and I was surprised by that, since when I first read The Futility of Emergence I thought that EY was being stupid and that there’s no way people could actually make such a basic mistake. But they do! I had no idea that people who reject reductionism actually use arguments like “consciousness is an emergent phenomenon which cannot be explained by looking at the interaction between neurons”. They don’t come out and say “top-down causality”, which really is a synonym for magic, like EHeller did, but they do say “emergence”.
When I downvoted, it was after I had made sure I understood spontaneous symmetry breaking, and that it was not top-down causality, since that was the argument EHeller presented that I took seriously. I think fewer people believe in reductionism just because of EY than you think.
Let's start with something rather uncontroversial. We would probably all agree that, when we reduce a complex system top-down, like a brain or a star, we find its increasingly small constituents interacting with other constituents, all the way down to the lowest accessible level. It is also largely uncontroversial that carefully putting together all the constituents *exactly* the way they were before would give us the original system, or something very close to it. Note the emphasis.
However, this is not what EHeller and I mean by emergence. This analysis/synthesis process is rather unpredictable and unstable in both directions. The same high-level behavior can be produced by wildly varying constituents, and this variation can happen at multiple levels. For example, ripples in a bowl of liquid can be produced by water, or by something else, or they might be a mirror image of an actual bowl, or they might be a video someone recorded, or a computer simulation, or a piece of fabric in the wind producing a similar effect, etc. You won't know until you start digging. Looking bottom-up, I'd call it “emergence convergence”.
If you look at the synthesis part, you will find that rather tiny variations in how you put things together result in enormous changes in the high-level behavior. A minor variation in the mass of one quark (which is probably determined by some hard-to-calculate term in some QFT equation) would result in a totally different universe. An evolution of the same initial conditions is likely to produce a different result in the one world you care about (but let's not get sidetracked by MWI). Laplace-style determinism has never worked in practice on any scale, as far as I know. Well, maybe there are some exceptions, I'm not sure. Anyway, my point is that what emerges from putting lots of similar things together in various ways is quite unpredictable, though you often can analyze it in retrospect.
While “emergence” is not a good explanation of anything, it happens often enough that people ought to expect it, so it has some predictive power. 2+2 might be 4, but 2+2+2+...+2 might not even be a number anymore. Like when these twos are U-235 atoms. When you get lots of people together, you might get a mindless mob, or you might get an army, and you won't know the first time you try. So, while emergence is not a good explanation, the idea is useful in a Hegel-like meta way: expect qualitative jumps simply from accumulating quantitative changes. Ignore this inevitability at your own peril.
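A toy numerical sketch of that “quantitative accumulation, qualitative jump” point, with invented numbers standing in for a chain-reaction threshold (this is not real neutron physics):

```python
# A made-up model where each generation of events multiplies by a factor that
# grows with the amount of material; crossing 1.0 changes the behavior in kind.

def multiplication_factor(amount, critical_amount=52.0):
    """Hypothetical: the reproduction factor crosses 1.0 at the critical amount."""
    return amount / critical_amount

def events_after(generations, amount, seed_events=1.0):
    k = multiplication_factor(amount)
    return seed_events * k ** generations

for amount in (10, 40, 51, 53, 60):
    print(amount, "kg ->", f"{events_after(50, amount):.3g}", "events after 50 generations")

# Below the threshold the activity fizzles toward zero; just above it, the same
# kind of accumulation diverges: a qualitative change from a quantitative one.
```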
I have no disagreement that high level behaviors are wildly variable, unpredictable, and all of the other words which mean “difficult to reduce down to lower level behaviors”. Yes, wildly different constituent parts can create the same macroscopic behavior, or changing just a single lower level property in a system can cause the system to be unrecognizably different from before. But my point is that, if the universe is a physics simulator, it only has to keep track of the quarks. When I wake up in the morning, the universe isn’t running a separate “human wake-up” program which tells the quarks how to behave; it’s just running the standard “quark” program that it runs for all quarks. That’s all it ever has to run. That’s all I’m saying, when I say that I believe in reductionism. Reductionism doesn’t say that it’s practical for us to think in those terms, just that the universe thinks in those terms.
Finding a counterexample to this, a time when, if our universe is a physics simulator, it must run code other than one process of ‘quark.c’ for each quark, would be a huge blow to reductionism, and I don't think one has been found yet. Perhaps I am wrong, although I have looked pretty thoroughly at this point, as I continue to google “arguments against reductionism” and find that none of them can actually give an example of such top-down causality.
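To pin down what that claim amounts to, here is a minimal Python sketch of a “simulator” whose only law is one local update rule applied to every particle, with no special cases keyed to higher-level patterns; the rule itself is a made-up toy (non-interacting oscillators), not real physics:

```python
# The reductionist picture in code form: the whole state evolves by running the
# same low-level rule once per particle, and nothing else.

from dataclasses import dataclass

@dataclass
class Particle:
    x: float
    v: float

def update(p: Particle, dt: float = 0.01) -> Particle:
    """The single low-level rule, the analogue of 'quark.c'. It never inspects
    what larger structure (a cookie, a brain) the particle happens to be part of."""
    return Particle(x=p.x + p.v * dt, v=p.v - 0.1 * p.x * dt)   # toy oscillator

def step_universe(particles: list[Particle]) -> list[Particle]:
    # No "human wake-up" branch anywhere: one rule, applied uniformly.
    return [update(p) for p in particles]

universe = [Particle(x=float(i), v=0.0) for i in range(5)]
for _ in range(100):
    universe = step_universe(universe)
print(universe[0])
```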
No one knows how to define a quark at human length scales, they aren’t meaningful degrees of freedom.
First, I am not comfortable modeling the universe as a computer program, because it’s implicitly dualist, with the program separate from the underlying “hardware”. Or maybe even “trialist”, if you further separate the hardware from the entity deciding what program to run. While this may well be the case (the simulation argument), at this point we have no evidence for it. So please be aware of the limitations of this comparison.
Second, how would you tell the difference between the two cases you describe? What would be an observable effect of running the “human wake-up” program which tells the quarks how to behave? If you cannot tell the difference, then all you have left is the Bayesian inference of the balance of probabilities based on Occam's razor, not any kind of certainty. Ellis and Co. actually argue that agency is an example of “top-down causality” (there would be no nuclei colliding in the LHC if humans did not decide to build it to begin with). I am not impressed with this line of reasoning, precisely because when you start investigating what “decide” means, you end up having to analyze humans in terms of lower-level structures, anyway.
Third, (explicitly, not tacitly) adopting the model of the universe as a computer program naturally leads to the separation of layers: you can run the Chemistry app on the Atomic Physics API, and you don't care what the implementation of the API is. That API is rather poorly designed and leaky, so we can infer quite a bit about its innards from the way it behaves. Maybe it was done by a summer student or something. There are some much nicer APIs inside. For example, whoever wrote up “electron.c” left very few loose ends: it has mass, charge and spin, but no size to probe and apparently no constituents. The only non-API hook into the other parts of the system is its weak isospin. Or maybe this was intentional, too. OK, it's time to stop anthropomorphizing.
Anyway, I tend to agree that it seems quite likely that there is no glitch in the matrix and we probably have only a single implementation instance of the currently-lowest-known-level API (the Standard Model of Particle Physics), but this is not true higher up. The same API is often reused at many different levels and for many different implementations, and insisting that there is only a single true top-down protocol stack implementation is not very productive.
It would be helpful, since you keep bringing up arguments that are in this paper, if you provided a link to the paper in question.
I feel like I have been misunderstood, and we are discussing this in other branches of the thread, but I can't help feeling that Laughlin has also been misunderstood, and I can't judge if you don't provide a link.