When whowhowho posted a list of a couple of names of people who don’t like reductionism, I said to myself “if reductionism is right, I want to believe reductionism is right; if reductionism is wrong, I want to believe reductionism is wrong”, etc. I then went and googled those names, since those people are smart people, and found a paper published by the first person on the list. The main arguments of the paper were “solid state physicists don’t believe in reductionism”, “consciousness is too complex to be caused by the interactions between neurons”, and “biology is too complex for DNA to contain a complete instruction set for cells to assemble into a human being”. Since argument screens off authority and the latter two arguments are wrong, I kept my belief.
EHeller apparently has no argument with reductionism, except that it isn’t a “good way to solve problems”, with which I agree entirely: if you try to build an airplane by modeling air molecules, it will take too long. But that doesn’t mean that if you try to build an airplane by modeling air molecules you will get a wrong answer. You will get the right answer. But then why did EHeller state his disagreement?
The paper uses “emergent” in exactly the way that EY described in The Futility of Emergence, and I was surprised by that, since when I first read The Futility of Emergence I thought EY was being stupid and that there was no way people could actually make such a basic mistake. But they do! I had no idea that people who reject reductionism actually use arguments like “consciousness is an emergent phenomenon which cannot be explained by looking at the interaction between neurons”. Unlike EHeller, they don’t come out and say “top-down causality” (which really is a synonym for magic), but they do say “emergence”.
When I downvoted, it was after I had made sure I understood spontaneous symmetry breaking, and that it was not top-down causality, since that was the argument EHeller presented that I took seriously. I think fewer people believe in reductionism just because of EY than you think.
Let’s start with something rather uncontroversial. We would probably all agree that when we reduce a complex system top-down, like a brain or a star, we find increasingly small constituents interacting with other constituents, all the way down to the lowest accessible level. It is also largely uncontroversial that carefully putting all the constituents back together *exactly* the way they were before would give us the original system, or something very close to it. Note the emphasis.
However, this is not what EHeller and I mean by emergence. This analysis/synthesis process is rather unpredictable and unstable in both directions. The same high-level behavior can be produced by wildly varying constituents, and this variation can happen at multiple levels. For example, ripples in a bowl of liquid can be produced by water, or by something else; or they might be a mirror image of an actual bowl, or a video someone recorded, or a computer simulation, or a piece of fabric in the wind producing a similar effect, etc. You won’t know until you start digging. Looking bottom-up, I’d call it “emergence convergence”.
If you look at the synthesis part, you will find that rather tiny variations in how you put things together result in enormous changes in the high-level behavior. A minor variation in the mass of one quark (which is probably determined by some hard-to-calculate term in some QFT equation) would result in a totally different universe. Evolving the same initial conditions twice is likely to produce a different result in the one world you care about (but let’s not get sidetracked by MWI). Laplace-style determinism has never worked in practice on any scale, as far as I know; well, maybe there are some exceptions, I’m not sure. Anyway, my point is that what emerges from putting lots of similar things together in various ways is quite unpredictable, though you often can analyze it in retrospect.
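As a rough illustration of that sensitivity (my own toy example, nothing from the paper or the thread): run the logistic map twice with a control parameter that differs only in the sixth decimal place, and the long-run trajectories come out completely decorrelated.

```python
# Toy illustration only: in a chaotic regime, a tiny change in one
# "constant of nature" (here the logistic-map parameter r) produces a
# completely different long-run trajectory.

def trajectory(r, x0=0.2, skip=500, keep=5):
    """Iterate x -> r*x*(1-x), discard the transient, return a few values."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

print(trajectory(3.900000))  # one "universe"
print(trajectory(3.900001))  # a minor variation in the constant: a different one
```

The analogy is loose, of course; the point is only that “put together almost the same way” does not imply “behaves almost the same way”.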
While “emergence” is not a good explanation of anything, it happens often enough that people ought to expect it, so it has some predictive power. 2+2 might be 4, but 2+2+2+...+2 might not even be a number anymore, like when these twos are U-235 atoms. When you get lots of people together, you might get a mindless mob, or you might get an army; you won’t know the first time you try. So, while emergence is not a good explanation, the idea is useful in a Hegel-like meta way: expect qualitative jumps simply from accumulating quantitative changes. Ignore this inevitability at your own peril.
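The U-235 point can be caricatured in a few lines (again my own sketch, not anything from the paper): model the chain reaction as a branching process in which each neutron triggers on average k new ones; a two-percent change in k is the difference between a fizzle and an explosion.

```python
# Caricature of "more of the same flips the behavior": a chain reaction as a
# branching process where each neutron triggers on average k new neutrons.

def neutrons_after(k, generations, start=1000.0):
    n = start
    for _ in range(generations):
        n *= k          # the same "add more of the same" step every generation
    return n

for k in (0.99, 1.01):  # subcritical vs. slightly supercritical
    print(k, neutrons_after(k, 2000))
# 0.99: the population dwindles to essentially nothing
# 1.01: it grows astronomically
```

Nothing in the per-neutron rule mentions “fizzle” or “explosion”; the qualitative jump comes purely from accumulating the same quantitative step.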
I have no disagreement that high-level behaviors are wildly variable, unpredictable, and all of the other words which mean “difficult to reduce down to lower-level behaviors”. Yes, wildly different constituent parts can create the same macroscopic behavior, or changing just a single lower-level property in a system can make the system unrecognizably different from before. But my point is that, if the universe is a physics simulator, it only has to keep track of the quarks. When I wake up in the morning, the universe isn’t running a separate “human wake-up” program which tells the quarks how to behave; it’s just running the standard “quark” program that it runs for all quarks. That’s all it ever has to run. That’s all I’m saying when I say that I believe in reductionism. Reductionism doesn’t say that it’s practical for us to think in those terms, just that the universe thinks in those terms.
Finding a counterexample to this (a case where, if our universe is a physics simulator, it must run code other than one process of ‘quark.c’ for each quark) would be a huge blow to reductionism, and I don’t think one has been found yet. Perhaps I am wrong, although I have looked pretty thoroughly at this point; I continue to google “arguments against reductionism” and find that none of them can actually give an example of such top-down causality.
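To be explicit about what I mean by the simulator picture, here is a deliberately silly sketch (mine alone, and obviously not how physics is actually implemented): the only code that ever runs is one per-particle update rule, and the high-level fact “the rock fell” is just a description we attach to the result afterwards.

```python
# A toy version of the "only quark.c" claim (my sketch, not actual physics):
# the simulator below only ever runs one per-particle update rule. A "rock"
# is just a label for a cluster of particles; there is no rock-level code
# telling the cluster how to fall.

from dataclasses import dataclass

G = 9.8     # uniform downward acceleration
DT = 0.01   # time step

@dataclass
class Particle:
    y: float
    vy: float

def step(p: Particle) -> None:
    """The one and only update rule, applied to every particle alike."""
    p.vy -= G * DT
    p.y += p.vy * DT

# A "rock": a hundred particles that happen to start close together.
rock = [Particle(y=100.0 + 0.01 * i, vy=0.0) for i in range(100)]

for _ in range(200):          # simulate two seconds
    for p in rock:            # the same rule for every particle, nothing else
        step(p)

# The high-level fact "the rock fell about 20 m" appears with no rock-level code.
print(round(sum(p.y for p in rock) / len(rock), 2))
```

A real counterexample to reductionism would be a situation where the equivalent of step() provably cannot be enough, and some rock-level (or human-level) routine has to be patched in.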
No one knows how to define a quark at human length scales; they aren’t meaningful degrees of freedom at those scales.
First, I am not comfortable modeling the universe as a computer program, because it’s implicitly dualist, with the program separate from the underlying “hardware”. Or maybe even “trialist”, if you further separate the hardware from the entity deciding what program to run. While this may well be the case (the simulation argument), at this point we have no evidence for it. So please be aware of the limitations of this comparison.
Second, how would you tell the difference between the two cases you describe? What would be an observable effect of running the “human wake-up” program which tells the quarks how to behave? If you cannot tell the difference, then all you have left is the Bayesian inference of the balance of probabilities based on Occam’s razor, not any kind of certainty. Ellis and Co. actually argue that agency is an example of “top-down causality” (there would be no nuclei colliding in the LHC if humans did not decide to build it to begin with). I am not impressed with this line of reasoning, precisely because when you start investigating what “decide” means, you end up having to analyze humans in terms of lower-level structures, anyway.
Third, explicitly (not tacitly) adopting the model of the universe as a computer program naturally leads to the separation of layers: you can run the Chemistry app on the Atomic Physics API, and you don’t care what the implementation of the API is. That API is rather poorly designed and leaky, so we can infer quite a bit about its innards from the way it behaves. Maybe it was done by a summer student or something. There are some much nicer APIs inside. For example, whoever wrote “electron.c” left very few loose ends: it has mass, charge and spin, but no size to probe and apparently no constituents. The only non-API hook into the other parts of the system is its weak isospin. Or maybe this was intentional, too. OK, it’s time to stop anthropomorphizing.
Anyway, I tend to agree that it seems quite likely that there is no glitch in the matrix and we probably have only a single implementation instance of the currently-lowest-known-level API (the Standard Model of Particle Physics), but this is not true higher up. The same API is often reused at many different levels and for many different implementations, and insisting that there is only a single true top-down protocol stack implementation is not very productive.
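To illustrate the layering point with the earlier ripples example (my sketch of the analogy, not a claim about how physics is actually structured): high-level code written against a “surface height” API runs identically whether the implementation underneath is hydrodynamics, a recording, or anything else.

```python
# Sketch of "same API, many implementations": the high-level code below only
# ever talks to the RippleMedium interface and never learns, or cares, what
# lower-level machinery backs it.

from math import sin
from typing import Protocol

class RippleMedium(Protocol):
    def surface_height(self, x: float, t: float) -> float: ...

class WaterBowl:
    """Stand-in for actual hydrodynamics."""
    def surface_height(self, x: float, t: float) -> float:
        return 0.01 * sin(2.0 * x - 3.0 * t)

class RecordedVideo:
    """A completely different implementation: playback of stored frames."""
    def __init__(self, frames):
        self.frames = frames
    def surface_height(self, x: float, t: float) -> float:
        return self.frames.get((x, t), 0.0)

def ripple_amplitude(medium: RippleMedium, t: float) -> float:
    """'Chemistry app' level code: it sees only the API."""
    samples = [medium.surface_height(x / 10.0, t) for x in range(100)]
    return max(samples) - min(samples)

print(ripple_amplitude(WaterBowl(), t=0.0))
print(ripple_amplitude(RecordedVideo({(0.0, 0.0): 0.02}), t=0.0))
```

Which is the point: agreeing that the bottom layer is probably implemented only once does not commit you to a unique protocol stack everywhere above it.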
Since you keep bringing up arguments that are in this paper, it would be helpful if you provided a link to the paper in question.
I feel like I have been misunderstood, and we are discussing this in other branches of the thread, but I can’t help feeling that Laughlin has also been misunderstood; I can’t judge without a link.