I don’t think you understand Laughlin’s point at all. Compare a small volume of superfluid liquid helium and a small volume of water with some bacteria in it. Both systems have the exact same Hamiltonian, and both have roughly the same amounts of the same constituents (protons, neutrons, electrons), yet the systems behave vastly differently. We can’t understand their differences by going to a lower level of description.
Modern materials science/solid state physics is the study of the tremendous range of different, complex behaviors that can arise from the same Hamiltonians. Things like spontaneous symmetry breaking are rigorously defined, well-observed phenomena that depend on aggregate, not individual, behavior.
Why didn’t he mention superfluidity, or solid state physics, then? The two examples he listed were consciousness not being explainable from a reductionist standpoint, and DNA not containing enough information to come anywhere near being a complete instruction set for building a human (wrong).
Also, I’m pretty sure that the superfluid tendencies of liquid helium-4 come from the fact that it is composed of six particles (two protons, two neutrons, two electrons), each with half-integer spin. Because you can’t make six halves add up to anything other than a whole number, quantum effects mean that all of the particles have exactly the same state and are utterly indistinguishable, even positionally, and that’s what causes the strange effects. I do not know exactly how this effect reduces down to individual behavior, since I don’t know exactly what “individual behavior” could mean when we are talking about particles which cannot be positionally distinguished, but saying that superfluid helium-4 and water have the exact same Hamiltonian is not enough to say that they should have the same properties.
Spontaneous symmetry breaking can be reduced down to quantum mechanics. You might solve a field equation and find that there are two different answers as to the masses of two quarks. In one answer, quark A is heavier than quark B; in the other, quark B is heavier than quark A. You might call this symmetry breaking, but the fact that you get one of the answers and not the other when you take the measurement does not mean that the symmetry was broken. The model correctly tells you to anticipate either answer with 1:1 odds, and you’ll find that your measurements agree with this: 50% of the time you’ll get the first answer, and 50% of the time you’ll get the second. In the MW interpretation, symmetry is not broken. The measurement doesn’t show what really happened, it just shows which branch of the wavefunction you ended up in. Across the entire wavefunction, symmetry is preserved.
Besides, it’s not like spontaneous symmetry breaking is a behavior which arises out of the organization of the particles. It occurs at the individual level.
I don’t know why Laughlin wrote what he did; you didn’t link to the paper. However, he comes from a world where solid state physics is obvious and “everyone knows” various things (the emergent properties of superfluid helium, for instance). Remember, his point of reference as a solid state physicist is quite different from a non-specialist’s, so there is a huge inferential distance. Also remember that in physics “emergent” is a technical, defined concept.
Your explanation of superfluid helium isn’t coherent, and I had a book-length post typed up when a simpler argument presented itself. Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say “in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons,” you do not have enough information to tell me how the system behaves, but from a purely reductionist standpoint, you should. If this doesn’t sway you, let’s agree to disagree, because I think spontaneous symmetry breaking should be enough to make my point, and it’s easier to explain.
I don’t think you understand what spontaneous symmetry breaking is; I have very little idea what you are talking about. Let’s ignore quantum mechanics for the time being, because we can describe what’s happening on an entirely classical level. Spontaneous symmetry breaking arises when the Hamiltonian has a symmetry that the aggregate ground state does not. That’s the whole definition, and BY DEFINITION it depends on details of the aggregate ground state and the organization of the particles.
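To spell that out with the simplest toy example I can think of (a classical particle in a double-well potential; nothing specific to helium or bacteria, and the constants a and b below are just placeholders):

$$H(x,p) = \frac{p^2}{2m} + V(x), \qquad V(x) = -\frac{a}{2}\,x^2 + \frac{b}{4}\,x^4, \quad a, b > 0.$$

H is symmetric under x → −x, but the lowest-energy configurations sit at x = +√(a/b) and x = −√(a/b), and whichever one the system settles into does not share that symmetry. The cases that matter here are the many-body versions of this, where it is the aggregate ground state of ~10^23 coupled degrees of freedom that fails to share the symmetry of the Hamiltonian.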
And finally, you can rigorously prove via renormalization group methods that in many systems the high-energy degrees of freedom can be averaged out entirely and have no effect on the form of the low-energy theory. In these systems (most theories), the details of the microphysics literally do not matter for describing low-energy structures. Computational physicists use this to their advantage all the time: if they want to look at meso- or macro-scale physics, they assume very simple micromodels that are easy to simulate, instead of realistic ones, and are fully confident they get the same meso- and macro-scale results.
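To give a flavor of the “very simple micromodels” I have in mind (my own illustrative sketch, not anything from Laughlin): a nearest-neighbor Ising model is about the crudest microscopic caricature of a magnet imaginable, yet near its critical temperature it shows the same large-scale ordering behavior as far more realistic microscopic models in the same universality class.

```python
# Minimal 2D Ising model with Metropolis updates: a deliberately crude
# micromodel. Below the critical temperature (~2.27 J/k_B for the square
# lattice) the spins order; above it they don't. That large-scale behavior
# does not care about the microscopic details we have thrown away.
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta):
    """One Metropolis sweep over an L x L lattice of +/-1 spins."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Energy cost of flipping spin (i, j); nearest neighbors only, J = 1.
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def mean_abs_magnetization(L=16, T=2.0, equilibrate=300, measure=100):
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(equilibrate):
        metropolis_sweep(spins, 1.0 / T)
    m = 0.0
    for _ in range(measure):
        metropolis_sweep(spins, 1.0 / T)
        m += abs(spins.mean())
    return m / measure

if __name__ == "__main__":
    for T in (1.5, 2.0, 2.27, 3.0):
        print(f"T = {T:4.2f}   <|m|> ~ {mean_abs_magnetization(T=T):.2f}")
```

Real magnets look nothing like a grid of ±1 spins microscopically, but for meso- and macro-scale questions this is the kind of model people actually simulate, precisely because the high-energy details integrate out.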
Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say “in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons,” you do not have enough information to tell me how the system behaves, but from a purely reductionist standpoint, you should.
I’ll admit that I am not a PhD particle physicist, but what you describe as reductionism is not the reductionism I believe to be true. If we ignore quantum physics and describe what’s happening on an entirely classical level, then we can reduce the behavior of a physical system down to its most fundamental particles and the laws which govern the interactions between those basic particles. You can predict how a system will behave by knowing the position and velocity of every particle in the system; you do not have to keep track of the system’s organization as a separate property, because the organization of a physical system can be deduced from the other two.
If reductionism, to you, means that by simply knowing the number of electrons, protons, and neutrons which exist in the universe, you should be able to know how the entire universe behaves, then I agree: reductionism is false.
With that in mind, can you give an example of top-down causality actually occurring in the universe? A situation where the interactions of low-level particles cannot predict the behavior of a system entirely composed of those particles, but where the high-level organization instead causes the interactions between the low-level particles to be different?
That’s what I think reductionism is: higher-level laws cannot contradict lower-level laws, and when you run the experiment to see which set of laws wins out, the lower-level laws will be correct every single time. Is this something you disagree with?
I don’t think you understand what spontaneous symmetry breaking is
I probably don’t. I was going off an AP Physics course in high school. My understanding is basically this: if you dropped a ball perfectly onto the top of a Mexican hat, symmetry would demand that all of the possible paths the ball could take are equally valid. But in the end the ball takes only one path, and which path it takes could not have been predicted from the base-level laws. A quick look at Wikipedia confirms that this idea at least has something to do with symmetry breaking, since one of the subsections of “Spontaneous symmetry breaking” is called “A pedagogical example: the Mexican hat potential”, so I cannot be entirely off.
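For reference, the “Mexican hat” shape that article is talking about can be written (with a and b just positive constants) as

$$V(x,y) = -a\,(x^2 + y^2) + b\,(x^2 + y^2)^2,$$

which is symmetric under rotations about the vertical axis, while its minima form a whole circle of radius √(a/2b); the ball has to end up at one particular point on that circle.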
In classical physics, the ball actually takes one path, and this path cannot be predicted in advance. But in QM, the ball takes all of the paths, and different you’s (different slices of the wavefunction which evolved from the specific neuron pattern you call you), combined, see every possible path the ball could have taken, and so across the wavefunction symmetry isn’t broken.
Since you’re a particle physicist and you disagree with this outlook, I’m sure there’s something wrong with it, though.
In these systems (most theories), the details of the microphysics literally do not matter for describing low-energy structures.
Is this similar to saying that when you are modeling how an airplane flies, you don’t need to model each particular nitrogen atom, oxygen atom, carbon atom, etc., in the air, but can instead use a model which just talks about “air pressure”, and your model will still be accurate? I agree with you; modeling every single particle when you’re trying to decide how to fly your airplane is unnecessary, and you can get the job done with a less complete model. But that does not mean that a model which did track every single atom in the air would be incorrect; the extra detail just does not have a large enough effect on the airplane to be noticeable. Indeed, I can see why computational physicists would use higher-level models to their advantage, when such models still get the right answer.
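Here’s a toy version of that point which I can actually run (my own sketch, nothing to do with real aerodynamics): simulate every “molecule” of a 1-D ideal gas bouncing around in a box, measure the force on one wall directly from the momentum the molecules dump into it, and compare it with the coarse-grained gas-law answer N·kT/L, which never mentions molecules at all.

```python
# Toy check: the force on the wall of a 1-D box of non-interacting "molecules",
# computed two ways: by tracking every particle's wall collisions, and by the
# coarse-grained ideal-gas formula F = N k T / L. (Illustrative sketch only.)
import numpy as np

rng = np.random.default_rng(1)

N, L = 5_000, 1.0           # number of particles, box length (arbitrary units)
m, kT = 1.0, 1.0            # particle mass, temperature in energy units
t_total, dt = 100.0, 0.01   # simulated time and time step

x = rng.uniform(0.0, L, N)                  # initial positions
v = rng.normal(0.0, np.sqrt(kT / m), N)     # Maxwell-Boltzmann velocities

impulse = 0.0                               # momentum delivered to the right wall
for _ in range(int(t_total / dt)):
    x += v * dt
    # Elastic reflection off the right wall: each bounce transfers 2 m |v|.
    right = x > L
    impulse += 2.0 * m * np.abs(v[right]).sum()
    x[right] = 2.0 * L - x[right]
    v[right] *= -1.0
    # Elastic reflection off the left wall (no bookkeeping needed there).
    left = x < 0.0
    x[left] = -x[left]
    v[left] *= -1.0

F_micro = impulse / t_total        # force from the particle-level bookkeeping
F_macro = N * kT / L               # force from the coarse-grained gas law
print(f"per-molecule model: F = {F_micro:.0f}")
print(f"ideal-gas formula : F = {F_macro:.0f}")
```

The two numbers should agree to within statistical noise, which is the sense in which the “air pressure” model isn’t wrong: it’s the molecule-level answer with the irrelevant detail averaged out.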
But reductionism simply says that there is no situation where a high-level model could get a more accurate answer than a low-level model. The low-level model is what is actually happening. Newtonian mechanics is good enough to shoot a piece of artillery at a bunker a mile away, but if you wanted to know with 100% accuracy where the shell was going to land, you would have to go further down than this. The more your model breaks macroscopic behavior down into the interactions between its base components, the closer your model resembles the way reality actually works.
Do you disagree?
So I think perhaps we are talking past each other. In particular, my definition of reductionism is that we can understand and model complex behavior by breaking a problem into its constituent components and studying them in isolation; i.e., if you understand the micro-Hamiltonian and the fundamental particles well, you understand everything. The idea of ‘emergence’ as physicists understand it (and as Laughlin was using it) is that there are aggregate behaviors that cannot be understood from looking at the individual constituents in isolation.
A weaker version of reductionism would say that to make predictions that are accurate to arbitrary precision we MUST know the microphysics. Renormalization arguments ruin this version of reductionism.
In a sense this
if you wanted to know with 100% accuracy where the shell was going to land, you would have to go further down than this.
seems to be espousing this form of reductionism, which I strongly disagree with. There exist physical theories where knowing microphysics is irrelevant to arbitrarily accurate predictions. Perhaps it would be best to agree on definitions before we make points irrelevant to each other.
there are aggregate behaviors that cannot be understood from looking at the individual constituents in isolation
Can you give me an example of one of these behaviors? Perhaps my google-fu is weak (I have tried terms like “examples of top down causality”, “against reductionism”, “nonreductionist explanation of”), and indeed I have a hard time finding anything relevant at all; I can’t find a single clear-cut example of behavior which cannot be understood from looking at the individual constituents in isolation.
The aforementioned spontaneous symmetry breaking shows up in a wide variety of different systems, but phase changes in general are probably good examples.
As a general pattern, I find that a lot of my physics-related posts receive downvotes (including both my posts in this very thread); I then request an explanation for why, no one responds, and then I receive upvotes. What I really want is for the people giving the downvotes to give me some feedback.
Physics was my PhD subject, and I believe that what I offer to the community is an above-average knowledge of the subject. If you believe my explanation is poorly thought out, incoherent or just hard to parse, please downvote, but let me know what it is that’s bugging you. I want to communicate effectively, and without feedback from the people who think my above post is not helpful, I’m likely to interpret downvotes in a noisy, haphazard way.
Water with bacteria and liquid helium have the same Hamiltonian, AND the same constituent particles. If I give you a box and say “in this box there are 10^30 protons, 10^30 neutrons and 10^30 electrons,” you do not have enough information to tell me how the system behaves, but from a purely reductionist standpoint, you should.
From a “purely reductionist stand-point” you would still need to know the initial conditions to predict how the system evolves. Yet you act as if this is a knockdown argument against reductionism.
I was just trying to make my point clearer; it’s suggestive, not a knockout. I think the knockout argument against a strict reductionism is the renormalization argument.
Also, my training is in particle physics, so I have no problem with reductionism in general, simply that as an approach it’s not a great way to understand many problems, and the post I responded to didn’t seem to understand that solid state physicists use “emergent” as more of a term of art than a ‘magical term.’
Your argument was not even suggestive; it was just wrong, because it ignores that a reductionist account would look at the initial conditions.
Also, my training is in particle physics, so I have no problem with reductionism in general, simply that as an approach it’s not a great way to understand many problems, and the post I responded to didn’t seem to understand that solid state physicists use “emergent” as more of a term of art than a ‘magical term.’
I don’t think that anyone is arguing that modeling physics at a high level of abstraction is not useful. It’s just that the abstract models are computational shortcuts, and where they disagree with less abstract models, the less abstract models will be more accurate.
The point of JohnWittle’s that I’m arguing against is the idea that emergent phenomena are inherently silly/stupid, a ‘magical word’ used to gloss over fuzzy thinking. I chose two very different systems in an attempt to show how incredibly sensitive to initial conditions physics can be, which makes the reductionist account (in many instances) the wrong approach. I apologize if this was not clear (and if you were a downvoter, I sincerely appreciate the feedback). Is my point clearer now? (I have resisted the urge to rephrase my original to try to add clarity.)
I also purposely chose two systems I believe have emergent behavior (superfluid helium certainly does; Laughlin postulated that biological entities/bacteria do). Originally I was going to say more about superfluid helium, before I realized how much I was going to have to write and decided spontaneous symmetry breaking was much clearer.
It’s just that the abstract models are computational shortcuts, and where they disagree with less abstract models, the less abstract models will be more accurate.
Sure, but it’s also important to remember that there exist aggregate behaviors that don’t depend on the microphysics in any meaningful way (the high-energy modes decouple and integrate out entirely), and as such can only be meaningfully understood in the aggregate. This is a different issue than the Newtonian mechanics/GR issue (Newtonian mechanics is a limit of GR, not an emergent theory based on GR; the degrees of freedom are the same).
My experience indicates that a vaguely anti-Eliezerish post, like someone questioning his orthodox reductionism, MWI or cryonics, gets an initial knee-jerk downvote, probably (that’s only an untested hypothesis) from those who think the matter is long settled and should not be brought up again. Eventually a less partial crowd reads it, and it may be upvoted or downvoted based on merits rather than on the degree of conformance. Drawing attention to the current total vote is likely to cause this moderate crowd to actually vote, one way or another.
When whowhowho posted a list of a couple names of people who don’t like reductionism, I said to myself “if reductionism is right, I want to believe reductionism is right. If reductionism is wrong, I want to believe reductionism is wrong” etc. I then went and googled those names, since those people are smart people, and found a paper published by the first name on the list. The main arguments of the paper were, “solid state physicists don’t believe in reductionism”, “consciousness is too complex to be caused by the interactions between neurons”, and “biology is too complex for DNA to contain a complete instruction set for cells to assemble into a human being”. Since argument screens off authority and the latter two arguments are wrong, I kept my belief.
EHeller apparently has no argument with reductionism, except that it isn’t a “good way to solve problems”, with which I agree entirely: if you try to build an airplane by modeling air molecules, it will take too long. But that doesn’t mean that if you try to build an airplane by modeling air molecules you will get a wrong answer; you will get the right answer. But then why did EHeller state his disagreement?
The paper uses emergent in exactly the way that EY described in The Futility of Emergence, and I was surprised by that, since when I first read The Futility of Emergence I thought that EY was being stupid and that there was no way people could actually make such a basic mistake. But they do! I had no idea that people who reject reductionism actually use arguments like “consciousness is an emergent phenomenon which cannot be explained by looking at the interaction between neurons.” They don’t come out and say “top-down causality” (which really is a synonym for magic) like EHeller did, but they do say “emergence.”
When I downvoted, it was after I had made sure I understood spontaneous symmetry breaking, and that it was not top-down causality, since that was the argument EHeller presented that I took seriously. I think fewer people believe in reductionism just because of EY than you think.
Let’s start with something rather uncontroversial. We would probably all agree that, reducing a complex system like a brain or a star top-down, we find its increasingly small constituents interacting with other constituents, all the way down to the lowest accessible level. It is also largely uncontroversial that carefully putting all the constituents back together exactly the way they were before would give us the original system, or something very close to it. Note the emphasis on exactly.
However, this is not what EHeller and I mean by emergence. This analysis/synthesis process is rather unpredictable and unstable in both directions. The same high-level behavior can be produced by wildly varying constituents, and this variation can happen at multiple levels. For example, ripples in a bowl of liquid can be produced by water, or by something else, or they might be a mirror image of an actual bowl, or a video someone recorded, or a computer simulation, or a piece of fabric in the wind producing a similar effect, etc. You won’t know until you start digging. Looking bottom-up, I’d call it “emergence convergence”.
If you look at the synthesis part, you will find that rather tiny variations in how you put things together result in enormous changes in the high-level behavior. A minor variation in the mass of one quark (which is probably determined by some hard-to-calculate term in some QFT equation) would result in a totally different universe. An evolution of the same initial conditions is likely to produce a different result in the one world you care about (but let’s not get sidetracked by MWI). Laplace-style determinism has never worked in practice on any scale, as far as I know. Well, maybe there are some exceptions, I’m not sure. Anyway, my point is that what emerges from putting lots of similar things together in various ways is quite unpredictable, though you often can analyze it in retrospect.
While “emergence” is not a good explanation of anything, it happens often enough that people ought to expect it, so it has some predictive power. 2+2 might be 4, but 2+2+2+...+2 might not even be a number anymore. Like when these twos are U-235 atoms. When you get lots of people together, you might get a mindless mob, or you might get an army; you won’t know the first time you try. So, while emergence is not a good explanation, the idea is useful in a Hegel-like meta way: expect qualitative jumps simply from accumulating quantitative changes. Ignore this inevitability at your own peril.
I have no disagreement that high level behaviors are wildly variable, unpredictable, and all of the other words which mean “difficult to reduce down to lower level behaviors”. Yes, wildly different constituent parts can create the same macroscopic behavior, or changing just a single lower level property in a system can cause the system to be unrecognizably different from before. But my point is that, if the universe is a physics simulator, it only has to keep track of the quarks. When I wake up in the morning, the universe isn’t running a separate “human wake-up” program which tells the quarks how to behave; it’s just running the standard “quark” program that it runs for all quarks. That’s all it ever has to run. That’s all I’m saying, when I say that I believe in reductionism. Reductionism doesn’t say that it’s practical for us to think in those terms, just that the universe thinks in those terms.
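A toy analogy for what I mean (my own example): in Conway’s Game of Life the “universe” only ever runs one local cell-update rule, applied uniformly to every cell. Gliders crawl across the grid anyway, and at no point does the program consult a separate “glider rule”:

```python
# Conway's Game of Life: the "simulator" applies one local rule to every cell.
# A glider travels across the grid even though no code anywhere mentions
# gliders; the higher-level pattern is carried entirely by the low-level rule.
import numpy as np

def step(grid):
    """Apply the single low-level update rule to every cell at once."""
    # Count live neighbors, with periodic boundary conditions.
    n = sum(np.roll(np.roll(grid, dx, 0), dy, 1)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors, survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

def show(grid):
    print("\n".join("".join("#" if c else "." for c in row) for row in grid))
    print()

grid = np.zeros((12, 12), dtype=int)
# Seed a single glider; "glider" exists only in this comment, not in the rule.
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1

for snapshot in range(4):          # print generations 0, 4, 8, 12
    show(grid)
    for _ in range(4):
        grid = step(grid)
```

The word “glider” appears only in a comment; the pattern is real and predictable at the high level, but the simulator never has to know about it.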
Finding a counterexample to this (a time when, if our universe were a physics simulator, it would have to run code other than one process of ‘quark.c’ for each quark) would be a huge blow to reductionism, and I don’t think one has been found yet. Perhaps I am wrong, although I have looked pretty thoroughly at this point, as I continue to google “arguments against reductionism” and find that none of them can actually give an example of such top-down causality.
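No one knows how to define a quark at human length scales; they aren’t meaningful degrees of freedom.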
When I wake up in the morning, the universe isn’t running a separate “human wake-up” program which tells the quarks how to behave; it’s just running the standard “quark” program that it runs for all quarks.
First, I am not comfortable modeling the universe as a computer program, because it’s implicitly dualist, with the program separate from the underlying “hardware”. Or maybe even “trialist”, if you further separate the hardware from the entity deciding what program to run. While this may well be the case (the simulation argument), at this point we have no evidence for it. So please be aware of the limitations of this comparison.
Second, how would you tell the difference between the two cases you describe? What would be an observable effect of running the “human wake-up” program which tells the quarks how to behave? If you cannot tell the difference, then all you have left is the Bayesian inference of the balance of probabilities based on Occam’s razor, not any kind of certainty. Ellis and Co. actually argue that agency is an example of “top-down causality” (there would be no nuclei colliding in the LHC if humans had not decided to build it to begin with). I am not impressed with this line of reasoning, precisely because when you start investigating what “decide” means, you end up having to analyze humans in terms of lower-level structures anyway.
Third, (explicitly, not tacitly) adopting the model of the universe as a computer program naturally leads to the separation of layers: you can run the Chemistry app on the Atomic Physics API, and you don’t care what the implementation of the API is. That API is rather poorly designed and leaky, so we can infer quite a bit about its innards from the way it behaves. Maybe it was done by a summer student or something. There are some much nicer APIs inside. For example, whoever wrote “electron.c” left very few loose ends: it has mass, charge and spin, but no size to probe and apparently no constituents. The only non-API hook into the other parts of the system is its weak isospin. Or maybe this was intentional, too. OK, it’s time to stop anthropomorphizing.
Anyway, I tend to agree that it seems quite likely that there is no glitch in the matrix and we probably have only a single implementation instance of the currently-lowest-known-level API (the Standard Model of Particle Physics), but this is not true higher up. The same API is often reused at many different levels and for many different implementations, and insisting that there is only a single true top-down protocol stack implementation is not very productive.
It would be helpful, since you keep bringing up arguments that are in this paper, if you provided a link to the paper in question.
I feel like I have been misunderstood, and we are discussing this in other branches of the thread, but I can’t help but feel that Laughlin has also been misunderstood, and I can’t judge if you don’t provide a link.