Let’s start with something rather uncontroversial. We would probably all agree that if we reduce a complex system top-down, like a brain or a star, we find increasingly small constituents interacting with other constituents, all the way down to the lowest accessible level. It is also largely uncontroversial that carefully putting all the constituents back together exactly the way they were before would give us the original system, or something very close to it. Note the emphasis on “exactly”.
However, this is not what EHeller and I mean by emergence. This analysis/synthesis process is rather unpredictable and unstable in both directions. The same high-level behavior can be produced by wildly varying constituents, and this variation can happen at multiple levels. For example, ripples in a bowl of liquid can be produced by water, or by some other liquid, or they might be a mirror image of an actual bowl, or a video someone recorded, or a computer simulation, or a piece of fabric in the wind producing a similar effect, etc. You won’t know until you start digging. Looking bottom-up, I’d call it “emergence convergence”.
If you look at the synthesis part, you will find that rather tiny variations in how you put things together result in enormous changes in the high-level behavior. A minor variation in the mass of one quark (which is probably determined by some hard-to-calculate term in some QFT equation) would result in a totally different universe. An evolution of the same initial conditions is likely to produce a different result in the one world you care about (but let’s not get sidetracked by MWI). Laplace-style determinism has never worked in practice on any scale, as far as I know; maybe there are some exceptions, I’m not sure. Anyway, my point is that what emerges from putting lots of similar things together in various ways is quite unpredictable, though you can often analyze it in retrospect.
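As a toy illustration of that unpredictability (just a stand-in, not physics): the logistic map below is a one-line update rule, yet two runs whose starting points agree to nine decimal places bear no resemblance to each other after a few dozen steps. The parameter r = 3.9 is simply a standard value in the map’s chaotic regime.

```c
/* Toy illustration of sensitivity to initial conditions: two trajectories
 * of the logistic map x -> r*x*(1-x) that start almost identically end up
 * nowhere near each other.  This is a stand-in, not a physical model. */
#include <stdio.h>

int main(void) {
    double r = 3.9;             /* a standard chaotic-regime parameter  */
    double a = 0.500000000;     /* initial condition                    */
    double b = 0.500000001;     /* the same, perturbed in the 9th digit */

    for (int step = 1; step <= 50; ++step) {
        a = r * a * (1.0 - a);
        b = r * b * (1.0 - b);
        if (step % 10 == 0)
            printf("step %2d: a = %.6f  b = %.6f  |a-b| = %.6f\n",
                   step, a, b, a > b ? a - b : b - a);
    }
    return 0;
}
```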
While “emergence” is not a good explanation of anything, it happens often enough that people ought to expect it, so the idea has some predictive power. 2+2 might be 4, but 2+2+2+...+2 might not even be a number anymore, like when those twos are U-235 atoms. When you get lots of people together, you might get a mindless mob, or you might get an army; you won’t know the first time you try. So, while emergence is not a good explanation, the idea is useful in a Hegel-like meta way: expect qualitative jumps simply from accumulating quantitative changes. Ignore this inevitability at your own peril.
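Here is a cartoon of that kind of jump, with invented numbers rather than real neutron physics: each identical lump you add nudges an effective multiplication factor k upward, and nothing qualitatively new happens until k crosses 1, at which point the very same rule that produced fizzles produces a runaway.

```c
/* Cartoon of a qualitative jump from a purely quantitative change: keep
 * adding identical "lumps", each of which raises an effective neutron
 * multiplication factor k.  Below k = 1 a burst of neutrons dies out;
 * above k = 1 it grows without bound.  All numbers are invented. */
#include <stdio.h>

static const char *fate(double k) {
    double n = 1.0;                      /* start with one neutron         */
    for (int gen = 0; gen < 100; ++gen)  /* follow 100 generations         */
        n *= k;                          /* each neutron yields k neutrons */
    if (n < 1.0)  return "dies out";
    if (n == 1.0) return "holds steady";
    return "RUNAWAY";
}

int main(void) {
    for (int lumps = 1; lumps <= 12; ++lumps) {
        double k = 0.1 * lumps;   /* each lump adds 0.1 to k (made up) */
        printf("%2d lumps: k = %.1f -> %s\n", lumps, k, fate(k));
    }
    return 0;
}
```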
I have no disagreement that high-level behaviors are wildly variable, unpredictable, and all of the other words which mean “difficult to reduce down to lower-level behaviors”. Yes, wildly different constituent parts can create the same macroscopic behavior, or changing just a single lower-level property in a system can cause the system to be unrecognizably different from before. But my point is that, if the universe is a physics simulator, it only has to keep track of the quarks. When I wake up in the morning, the universe isn’t running a separate “human wake-up” program which tells the quarks how to behave; it’s just running the standard “quark” program that it runs for all quarks. That’s all it ever has to run. That’s all I’m saying when I say that I believe in reductionism. Reductionism doesn’t say that it’s practical for us to think in those terms, just that the universe thinks in those terms.
Finding a counterexample to this (a case where, if our universe is a physics simulator, it must run code other than one process of ‘quark.c’ for each quark) would be a huge blow to reductionism, and I don’t think one has been found yet. Perhaps I am wrong, although I have looked pretty thoroughly at this point, as I continue to google “arguments against reductionism” and find that none of them can actually give an example of such top-down causality.
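A minimal sketch of what this picture amounts to, in a toy universe rather than QFT: the program below updates every cell of a one-dimensional cellular automaton with the same local rule (Rule 110) and nothing else. Whatever large-scale structure shows up in the output has no code of its own; it is just what many copies of the one low-level rule happen to produce.

```c
/* Toy version of "the universe only ever runs quark.c": every cell is
 * updated by the same local rule (Rule 110).  There is no separate code
 * for the large-scale patterns that appear in the output. */
#include <stdio.h>
#include <string.h>

#define WIDTH 79
#define STEPS 40

int main(void) {
    unsigned char cur[WIDTH] = {0}, next[WIDTH];
    cur[WIDTH - 1] = 1;                  /* a single "excited" cell to start */

    for (int t = 0; t < STEPS; ++t) {
        for (int i = 0; i < WIDTH; ++i)
            putchar(cur[i] ? '#' : ' ');
        putchar('\n');

        for (int i = 0; i < WIDTH; ++i) {
            int l = cur[(i + WIDTH - 1) % WIDTH];   /* wrap-around neighbours */
            int c = cur[i];
            int r = cur[(i + 1) % WIDTH];
            int pattern = (l << 2) | (c << 1) | r;  /* neighbourhood as 0..7  */
            next[i] = (110 >> pattern) & 1;         /* Rule 110 lookup        */
        }
        memcpy(cur, next, sizeof cur);
    }
    return 0;
}
```

Rule 110 is just a convenient choice here because it is known to generate rich structure from a trivial rule; the point would stand for any fixed low-level update rule.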
When I wake up in the morning, the universe isn’t running a separate “human wake-up” program which tells the quarks how to behave; it’s just running the standard “quark” program that it runs for all quarks.
First, I am not comfortable modeling the universe as a computer program, because it’s implicitly dualist, with the program separate from the underlying “hardware”. Or maybe even “trialist”, if you further separate the hardware from the entity deciding what program to run. While this may well be the case (the simulation argument), at this point we have no evidence for it. So please be aware of the limitations of this comparison.
Second, how would you tell the difference between the two cases you describe? What would be an observable effect of running the “human wake-up” program that tells the quarks how to behave? If you cannot tell the difference, then all you have left is the Bayesian inference of the balance of probabilities based on Occam’s razor, not any kind of certainty. Ellis and Co. actually argue that agency is an example of “top-down causality” (there would be no nuclei colliding in the LHC if humans had not decided to build it to begin with). I am not impressed with this line of reasoning, precisely because when you start investigating what “decide” means, you end up having to analyze humans in terms of lower-level structures anyway.
Third, (explicitly, not tacitly) adopting the model of the universe as a computer program naturally leads to the separation of layers: you can run the Chemistry app on the Atomic Physics API, and you don’t care what the implementation of the API is. That API is rather poorly designed and leaky, so we can infer quite a bit about its innards from the way it behaves. Maybe it was done by a summer student or something. There are some much nicer APIs inside. For example, whoever wrote “electron.c” left very few loose ends: it has mass, charge and spin, but no size to probe and apparently no constituents. The only non-API hook into the other parts of the system is its weak isospin. Or maybe this was intentional, too. OK, it’s time to stop anthropomorphizing.
Anyway, I tend to agree that it seems quite likely that there is no glitch in the matrix and we probably have only a single implementation instance of the currently-lowest-known-level API (the Standard Model of Particle Physics), but this is not true higher up. The same API is often reused at many different levels and for many different implementations, and insisting that there is only a single true top-down protocol stack implementation is not very productive.
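To make the layering metaphor concrete (every name below is invented; this is not a real physics or chemistry library): the “chemistry” layer in this sketch is written only against an abstract interface, and it runs unchanged on two completely different implementations, which is the sense in which the same API can be reused over many substrates.

```c
/* Sketch of layered APIs: the "chemistry" code calls only the interface
 * and never sees the implementation.  All names and numbers are invented
 * for illustration; this is not a real physics or chemistry library. */
#include <stdio.h>

/* The "Atomic Physics API": the only thing the higher layer can see. */
typedef struct {
    const char *name;
    double (*bond_energy)(int z1, int z2);  /* some observable, made-up units */
} atomic_physics_api;

/* Implementation 1: a crude closed-form model. */
static double model_bond_energy(int z1, int z2) {
    return 1.0 / (1.0 + (double)(z1 + z2));
}

/* Implementation 2: a lookup table, standing in for a totally different
 * lower level (a cached dataset, a slower ab initio code, whatever). */
static double table_bond_energy(int z1, int z2) {
    static const double table[4][4] = {
        {0.0, 0.5, 0.3, 0.2},
        {0.5, 0.4, 0.3, 0.2},
        {0.3, 0.3, 0.2, 0.1},
        {0.2, 0.2, 0.1, 0.1},
    };
    return (z1 >= 0 && z1 < 4 && z2 >= 0 && z2 < 4) ? table[z1][z2] : 0.0;
}

/* The "Chemistry app": written against the API, not the implementation. */
static void run_chemistry(const atomic_physics_api *api) {
    printf("[%s] H-H bond energy: %.3f\n", api->name, api->bond_energy(1, 1));
}

int main(void) {
    atomic_physics_api impl_a = {"closed-form model", model_bond_energy};
    atomic_physics_api impl_b = {"lookup table",      table_bond_energy};
    run_chemistry(&impl_a);   /* the same high-level code ...        */
    run_chemistry(&impl_b);   /* ... on two different "lower levels" */
    return 0;
}
```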
No one knows how to define a quark at human length scales; quarks aren’t meaningful degrees of freedom there.