The real difference between Reductionism and Emergentism
After trying to discover why the LW wiki “definition” of Reductionism appeared so biased, I concluded from the responses that it was never really intended as a definition of the Reductionist position itself, but as a summary of what is considered to be wrong with positions critical of Reductionism.
The argument goes like this. “Emergentism”, as the critical view is often called, points to the properties that emerge when a system is assembled from elements that do not themselves show those properties. From such considerations it identifies various ways in which research programmes based on a reductionist approach may distort priorities and underestimate difficulties. So far, this is all a matter of degree, and each case must eventually be settled on its merits. However, it gets philosophically sensitive when Emergentists claim that a Reductionist approach may be unable in principle to ‘explain’ certain emergent properties.
The response to this claim (I think) goes like this. (1) The explanatory power of a model is a function of its ingredients. (2) Reductionism includes all the ingredients that actually exist in the real world. Therefore (3) Emergentists must be treating the “emergent properties” as extra ingredients, thereby confusing the “map” with the “territory”. So Reductionism is defined by EY and others as, in effect, not treating emergent properties as extra ingredients.
At this point it is important to distinguish “Mind theory” from other fields where Reductionism is debated. In Mind theory, Reductionists apparently regard Emergentism as a form of disguised Vitalism/Dualism: if emergent properties can’t be explained by the physical ingredients, they must exist in some non-physical realm. However, Emergentism can apply equally well to everything from chess-playing programs to gearbox vibrations, neither of which involves anything like mysterious spiritual substances, so this can hardly be the whole story. In fact I would argue that the reverse is the case: Vitalists or “substance Dualists” are actually unconscious Reductionists as well, since in assuming that an extra ingredient is necessary to account for the things which they believe Physicalism cannot explain, they are still reducing a system to its ingredients. Emergentists, by contrast, reject premise (1) of the previous paragraph, that the explanatory power of a model is a function of its ingredients. Thus it seems to me that the real difference between Reductionists and Emergentists is a difference over the nature of explanation, so it seems worthwhile looking into some of the different things that can be meant by “explanation”.
For simplicity, let us illustrate this by the banal example of a brickwork bridge. The elements are the bricks and their relative positions. Our Reductionist R points out that these are the only elements you need (after all, if you remove all the bricks there is nothing left) and so proposes to become an expert in bricks. Our (Physicalist) Emergentist E suggests that this won’t be of much use without a knowledge of the Arch (an emergent feature). R isn’t stupid and agrees that such knowledge would be extremely useful, but points out that, given the very powerful computer available, it isn’t strictly necessary if no expert in Arch Theory is to hand: it’s not an inherent requirement. Simply solving the force balance equations for each brick will establish whether a given structure will fall into the river. Isn’t that an explanation?
Not in my sense, says E: to start with, it doesn’t tell me how the bridge will be designed, only how an existing design will be analysed. So R explains that the computer will generate structures randomly until one is found that satisfies the requirements of equilibrium. When E enquires how stability will be checked, R replies that the force balance will be checked under all possible small deviations from the design position.
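R’s generate-and-test procedure can be sketched in a few lines of code. This is a deliberately minimal sketch, under toy assumptions that are mine rather than the dialogue’s: the “bridge” becomes a 2D stack of unit bricks, “equilibrium” becomes the classical centre-of-mass condition for stacked blocks, and a “design” is just a random choice of horizontal offsets. The names `stable` and `random_search` are illustrative.

```python
import random

BRICK_LEN = 1.0  # unit-length bricks

def stable(offsets):
    """Equilibrium check for a 2D stack of unit bricks.

    offsets[i] is the left edge of brick i (brick 0 is the base).
    The stack balances iff, at every level, the centre of mass of
    all the bricks above lies over the brick directly below.
    """
    for i in range(len(offsets) - 1):
        upper = offsets[i + 1:]
        com = sum(o + BRICK_LEN / 2 for o in upper) / len(upper)
        if not (offsets[i] <= com <= offsets[i] + BRICK_LEN):
            return False
    return True

def random_search(n_bricks, target_overhang, tries=100_000):
    """R's strategy: generate designs at random and keep the first one
    that both reaches the target overhang and passes the force balance."""
    for _ in range(tries):
        offs = [0.0]
        for _ in range(n_bricks - 1):
            offs.append(offs[-1] + random.uniform(0.0, BRICK_LEN / 2))
        if offs[-1] >= target_overhang and stable(offs):
            return offs
    return None  # no solution found within the budget
```

E’s further demand, stability under small deviations, would amount to re-running `stable` on slightly perturbed copies of a successful design.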
E isn’t satisfied. To claim understanding, R must be able to apply the results of the first design to new bridges of different span, but all (s)he can do is repeat the whole process from scratch every time.
On the contrary, replies R, this being the age of Big Data, the computer can generate solutions in a large number of cases and then use pattern recognition software to extract rules that can be applied to new cases.
Ah, says E, but explaining these rules means hypothesising more general rules from which they can be derived, using appropriate Bayesian reasoning to confirm your hypothesis.
OK, replies R, my program has a heuristic feature that has passed the Turing Test. So anything you can do along these lines, it can do just as well.
So using R’s approach, explanation even in E’s most general sense can always be arrived at by a four-stage process: (1) construct a model using the basic elements applicable to the situation, (2) fill a substantial chunk of solution space, (3) use pattern recognition to extract pragmatic rules, (4) use hypothesis generation and testing to derive general principles from the rules. It may be a trivial illustration, but it seems to me that in a broad sense this sort of process must be applicable in almost any situation.
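The four stages can be made concrete on a stacking toy. A sketch under stated assumptions: stage (1) assumes the classical block-stacking result that n unit bricks in a harmonic stack can overhang at most the sum of 1/(2k) for k = 1 to n-1; stage (2) enumerates a slab of solution space; stage (3) extracts a pragmatic rule by a least-squares fit of overhang against ln(n); stage (4) hypothesises the general principle (the harmonic series grows like ln(n), so the slope should be near 1/2) and tests the extracted rule on cases outside the data.

```python
import math

# Stage (1): the model -- nothing but bricks and positions.  Classical
# result assumed: max overhang of n unit bricks = sum_{k=1}^{n-1} 1/(2k).
def max_overhang(n):
    return sum(1.0 / (2 * k) for k in range(1, n))

# Stage (2): fill a chunk of solution space by brute enumeration.
cases = [(n, max_overhang(n)) for n in range(2, 51)]

# Stage (3): pattern recognition -- least-squares fit of the pragmatic
# rule  overhang ~ a + b * ln(n),  extracted from the data alone.
xs = [math.log(n) for n, _ in cases]
ys = [o for _, o in cases]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = my - b * mx

# Stage (4): hypothesise a general principle (b should be near 1/2,
# since the harmonic series grows like ln(n)) and test the rule on
# cases well outside the data it was extracted from.
for n in (100, 200):
    assert abs((a + b * math.log(n)) - max_overhang(n)) < 0.15
```

The point of the toy is that stages (3) and (4) operate on the *solution space* produced by stages (1) and (2), not on the bricks themselves.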
How should we interpret this conclusion? R would say that it proves that “explanation” can be arrived at using a Reductionist model. E would say it proves the inadequacy of Reductionism, since Reductionist steps (1) & (2) have to be supplemented by Integrationist steps (3) & (4): the rules found at step (3) are precisely “emergent features” of the solution space. Moreover, pattern recognition is not a closed-form process with repeatable results. (Is it?) On the other hand, the patterns identified in solution space might well be derivable in closed form directly from higher-level characteristics of the system in question (such as constraints in the system).
I would say that the choice of interpretation is a matter of convention, though I own up that I find the Emergentist mind-set more helpful in the fields I have learnt something about. What really matters is a recognition of the huge difference between “providing a solution” and “generalising from solution space” as types of explanation. The “Emergentist” label is a reminder of that difference. But call yourself a “Reductionist” if you like so long as you acknowledge the difference.
It seems to me that the sort of argument sketched here provides useful pointers to help recognize when “Reductionism” becomes “Greedy Reductionism”(A). For example, consider the claim that mapping the Human Connectome will enable the workings of the brain to be explained. Clearly, the mapping is just step (1). Consider the size of the Connectome, and then consider the size of the solution space of its activity. That makes step (1) sound utterly trivial compared with step (2). This leaves the magnitude of steps (3) & (4) to be evaluated. That doesn’t mean the project won’t be extremely valuable, but it puts the time-frame of the claim to provide real “understanding” into a very different light, and underlines the continued value of working at other scales as well.
(A): See e.g. fubarobfusco’s comment on my earlier discussion.