Reductionist research strategies and their biases
I read an extract of Wimsatt (1980) [1], which includes a list of common biases in reductionist research. I suppose most of us are reductionists most of the time, so these may be worth looking at.
This is not an attack on reductionism! If you think reductionism is too sacred for such treatment, you’ve got a bigger problem than anything on this list.
Here’s Wimsatt’s list, with some additions from the parts of his 2007 book Re-engineering Philosophy for Limited Beings that I can see on Google Books. His lists often lack specific examples, so I came up with my own examples and inserted them in [brackets].
Conceptualization
Descriptive Localization: Describing a relational property as if it were monadic, or a lower-order relational property.
Fitness treated as a property of phenotype (or even of genes) rather than as a property of phenotype and environment.
[This may be equivalent to assuming that you can apply linearization to remove variables from a function, as you often do when analyzing the stability of equilibria. So it’s often a useful assumption.]
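To make that bracketed point concrete, here is a minimal sketch (my own toy example, not Wimsatt’s) of the linearization move: freeze the environmental variable, linearize around an equilibrium, and read local stability off the Jacobian’s eigenvalues.

```python
# Toy sketch (mine, not Wimsatt's): stability of a damped pendulum's
# equilibrium via linearization. The environmental variable e (damping
# imposed by the environment) is frozen -- the "remove the environment"
# move described above.
import numpy as np

def f(x, e):
    # State x = (angle, angular velocity); e sets the damping.
    return np.array([x[1], -np.sin(x[0]) - e * x[1]])

def jacobian_wrt_state(x, e, h=1e-6):
    # Numerical Jacobian with respect to the state only; e is held fixed.
    n = len(x)
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = h
        J[:, i] = (f(x + dx, e) - f(x - dx, e)) / (2 * h)
    return J

x_eq = np.array([0.0, 0.0])                 # pendulum hanging straight down
eigs = np.linalg.eigvals(jacobian_wrt_state(x_eq, e=0.5))
print(eigs.real)                            # all negative => locally stable
# The conclusion holds only while e really is constant; if the environment
# varies, the environment-free linearized model can mislead.
```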
Meaning Reductionism: Assuming that lower-level redescriptions change the meanings of scientific terms, while [going back to?] higher-level redescriptions [do] not.
Philosophers (who view themselves as concerned with meaning relations) are inclined to a reductionist bias.
[What he might mean: Modernist theories of meaning begin by studying sentences in isolation, as logical propositions. Take a classic example, “Bachelors are unmarried men,” represented as BAUM = ∀x: B(x) → (U(x) ∧ M(x)). They show that there are instances #bob where B(#bob) holds, yet affirming U(#bob) or M(#bob) would seem peculiar; they then conclude ¬true(BAUM), and generalize to say that true(∀x: P(x) → Q(x)) is not meaningful in general. But they don’t consider that we may use the English word “truth” differently when talking about propositions with quantified or unbound variables (BAUM) than when talking about propositions with only bound variables (“Justin Bieber is a bachelor”). I don’t think this is the main problem with these modernist analyses, though; compare the similar “birds can fly” / “*penguins can fly” argument.]
Interface Determinism: Assuming that all that counts in analyzing the nature and behavior of a system is what comes or goes across the system-environment interface.
black-box behaviorism: all that matters about a system is how it responds to given inputs
[Systems with hysteresis cannot be made sense of with a timeless black-box analysis; see the sketch after this item’s examples.]
black-world perspectivalism: all that matters about the environment is what comes in across the system boundaries and how it responds to the system’s outputs.
[At first, this seemed true to me. But if the environment behaves predictably, and the system studied relies on this predictable behavior, any analysis that doesn’t model the environment will futilely try to find mechanisms inside the system that produce the needed information. For instance, if you were studying circadian rhythms, and your model of the environment specified the sky’s brightness at any particular moment but didn’t model brightness as a 24-hour cycle, you would run into serious difficulty trying to explain the organism’s cyclic behavior entirely in terms of internal components.]
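Here is a minimal sketch of the hysteresis point from the black-box item above (my own toy example; the class and thresholds are invented for illustration): a thermostat with a dead band returns different outputs for the same input, depending on its stored state, so no timeless input-to-output table can reproduce it.

```python
# Toy illustration (mine): a thermostat with hysteresis. The same input
# temperature can produce either output, depending on stored state, so no
# timeless input -> output mapping reproduces its behavior.
class Thermostat:
    def __init__(self, low=18.0, high=22.0):
        self.low, self.high = low, high
        self.heating = False          # internal state: the "memory"

    def step(self, temperature):
        if temperature <= self.low:
            self.heating = True
        elif temperature >= self.high:
            self.heating = False
        # Between low and high, output depends on history, not on input.
        return self.heating

t = Thermostat()
print(t.step(17.0), t.step(20.0))   # True True  -- rising through 20: still heating
print(t.step(23.0), t.step(20.0))   # False False -- falling through 20: still off
```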
Entificational anchoring: Assume that all descriptions and processes are to be referred to entities at a given level, which is particularly robust, salient, or whatever. This is the ontological equivalent of assuming that there is a single cause for a phenomenon, or single level at which causation can act.
Thus the tendency to regard individual organisms as primary [in selection, presumably].
Cf. methodological individualism for rational decision theorists and other social scientists. [This is a particularly important point to inject into the FAI/CEV discussion of “human values”. The values encoded into the behavior patterns of individual humans by individual selection, the values encoded by kin selection, the goals they develop through interaction with the environment (which are probably not distinguishable, on later inspection of the brain, from “final goals”), the values they hold consciously, and the socially-condoned values of human groups are all different, encoded in a variety of representations and levels of abstraction, and often opposed to each other. A rational agent is by definition rational only within one representation and with one set of non-contradictory goals. I haven’t seen discussion in the FAI literature of this problem.]
Similarly for genes for some reductionist neo-Darwinians. [Not sure if anybody actually holds such a position.]
Model Building and Theory Construction
Modeling Localization: Look for an intrasystemic mechanism to explain a systemic property, rather than an intersystemic one. Structural properties are regarded as more important than functional ones, and mechanisms as more important than context.
[I don’t know what he means by “functional”.]
[The example above of trying to model circadian rhythms without modeling environmental cycles would also be an example of this bias.]
[Chomsky positing that children must have a built-in universal grammar because he didn’t do the math]
[See all of behavior-based robotics and everything written by Rodney Brooks for objections to this bias in artificial intelligence.]
Simplification: Simplify environment before simplifying system. This strategy often legislates higher-level systems out of existence or leaves no way of describing systemic phenomena appropriately.
Generalization: When starting out to improve a simple model of a system in its environment, focus on generalizing or elaborating the internal structure at the cost of ignoring generalizations or elaborations of the external structure.
Corollary: If the model doesn’t work, it must be because of simplifications in the description of internal structure, not because of simplified descriptions of external structure.
Observation and Experimental Design
Focused Observation: Reductionists will tend not to monitor environmental variables, and thus will often tend not to record data necessary to detect interactional or larger-scale patterns.
[Nearly every drug toxicity study ever, for failing to sample each subject’s gut microbiome, which is a primary determinant of how ingested drugs are broken down]
Environmental Control: Reductionists will tend to keep environmental variables constant, and will thus often miss dependencies of system variables on them. (“Ceteris paribus” is viewed as a qualifier on environmental variables.)
[Mouse experiments often use sedentary mice fed ad libitum in HEPA-filtered cage environments. Interventions that extend lifespan in such experiments, such as rapamycin or caloric restriction, may work less well in other environments.]
Locality of Testing: Check only that a theory works locally (or only in the laboratory), rather than testing it in appropriate natural environments or doing appropriate robustness analyses to identify the important environmental variables and/or parameter ranges.
[The Challenger disaster, caused by launching the space shuttle in weather too cold for its O-rings to seal rapidly enough]
Abstractive Reification: Observe or model only those things that are common to all cases; don’t record individuating circumstances.
Raff (1996) [3] notes that evolutionary geneticists focus on intraspecific variability, while developmental geneticists focus only on genes that are invariant within the species. This produces problems both of methodology and of focus when trying to relate micro-evolution and macro-evolution or evolution and development.
Cognitive developmental psychologists tend to look only for invariant features in cognition, or major dysfunctions, rather than populational variation.
Articulation-of-Parts (AP) Coherence (Kauffman/Taylor/Schank): Assuming that results from parts studied separately under different conditions remain valid when put together to give an explanation of the whole.
[There’s a classic case in cell biology of a chemical that has opposite effects on cells in vitro and in vivo, though I can’t recall now what it is.]
Behavioral Regularity (Schank/Wimsatt): The search for systems whose behavior is relatively regular and controllable will result in selection of systems that may be uncharacteristically stable because they are insensitive to environmental variations.
Schank: Regular 4-day cyclers among Sprague-Dawley rats are insensitive to conspecific pheromones. [This is probably a reference to this article. I think Wimsatt’s point is that biologists chose to study ovulation cycles using a particular strain of rat because it had regular cycles, and it seems that that particular strain of rat had regular cycles because of an inbred genetic deficit in its regulation of ovulation cycles.]
[The initial resistance to chaos theory and nonlinear systems theory was due to linear analysis having done a very good job for centuries on problems that were studied because linear analysis worked on them.]
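As a minimal illustration of why those linear habits fail (my own toy example, not from the post’s sources): the logistic map is about the simplest nonlinear system there is, and in its chaotic regime two nearly identical trajectories diverge completely, which no linearization around a fixed point will predict.

```python
# Toy illustration (mine): the logistic map at r = 4 is chaotic. Two
# nearly identical starting points diverge until their difference is of
# order 1 -- behavior invisible to linear analysis around a fixed point.
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-7
for _ in range(40):
    a, b = logistic(a), logistic(b)
print(abs(a - b))   # typically order 0.1-1: the 1e-7 difference has exploded
```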
Functional Localization Fallacies
Deficit Reification: Assuming that the function of a part is to produce whatever the system fails to do when that part is absent, or whatever is produced when it is activated or stimulated.
spark plugs as “sputter suppressors”
Assuming 1-1 Mappings Between Parts and Functions:
Stopping the search for functions of a part after finding one; e.g., hemoglobin also functions in NO+ transport
Ignoring the division of labor when a part’s necessity is shown through deletion studies, thus missing the roles of other parts
[The NCBI stores data on all bacterial genes in a format that assumes each gene has exactly one function [4]]
Ignoring interventive effects and damage due to experimental manipulations
in neurophysiological studies
marking specimens in mark-recapture studies may affect their fitness
Mistaking lower-level functions for higher-level ends, or misidentifying the system that is benefited:
[I think his examples here are mistaken]
eliminative reductionists who want to deny the existence of large domains of cognitive function [long discussion of this here, which I recommend not reading; suffice it to say that I think the ERs being complained of are not misidentifying the system benefited, but just using language a little sloppily]
Imposition of an incorrect set of functional categories.
Common in philosophy of psychology when it neglects ethology, ecology, and evolutionary biology.
There are at least two possible corrective measures… The first is robustness analysis—a term and procedure first suggested by Richard Levins (1966) [2]. The second, which I will call “multilevel reductionist analysis,” involves using these heuristics simultaneously at more than one level of organization—a procedure that allows discovery of errors and their correction… It should be clear that these heuristics are mutually supporting, not only in their effective use in structuring and in solving problems, but also in reinforcing, in multiplying, and, above all, in hiding the effects of their respective biases… Whatever can be said for theories or paradigms as self-confirming entities, as much and perhaps more can be said similarly for [sets of] heuristics.
[1]. William Wimsatt (1980). Reductionist research strategies and their biases in the units of selection controversy. In T. Nickles, ed., Scientific Discovery: Case Studies. Dordrecht: Reidel, pp. 213-259.
[2]. Richard Levins (1966). The strategy of model building in population biology. American Scientist, 54:421-431.
[3]. Rudolf Raff (1996). The Shape of Life: Genes, Development, and the Evolution of Animal Form. Chicago: University of Chicago Press.
[4]. They let you use multiple GO tags, and put multiple names within a protein’s name field if separated by slashes, but these are not adequate solutions.
This post has gotten 3? downvotes and no comments. If you downvote it, it would help me if you left a comment saying why.
I haven’t downvoted it, but the post looks like a few pages of personal notes with little effort spent to make them palatable or interesting to other people. A tl;dr and some explanation why anyone should care could help.
Everyone should care because the biases that are “close to home” are ones that matter. This is an important subject.
Normative versus descriptive. Saying “everyone should care” doesn’t change the fact that some don’t, and that for those people, a tidier presentation may help, even if it wouldn’t make a difference for an ideal rationalist.
Regarding the subject as important is not at all exclusive of wanting a better presentation.
I upvoted, but the tone of this post is terse, which makes it fairly difficult to understand. Some of the examples are confusing. It’s not very readable for people who haven’t already been exposed to these ideas; you may be assuming too much background knowledge that reading the book gave you.
I didn’t moderate it, but this post looks pretty close to a Gish Gallop.
I don’t see this as a Gish Gallop, as it doesn’t even appear to me to be an argument. It just looks like a list of biases that reductionists should take extra care to avoid. The “should” part wasn’t argued, just assumed.
“Reductionists should avoid these biases” implies that reductionists have those biases to a significant degree, and that when examples are given they are examples of these biases. This post contains at least 33 separate items implying that reductionists are often biased in some particular way, plus all the specific examples that are brought up. Nobody could possibly answer them all.
Why would you “answer” them? This is not a “reductionism is bad” argument, and I would find it oddly religious if you felt the need to insist that reductionism was unique among all methodologies in not imposing a bias.
“This is not a “reductionism is bad” argument”
Conversational implicature suggests that when you give a list of 33 ways in which reductionists can be biased, you are claiming that reductionists are exceptionally biased. It is logically possible that you are merely saying they are biased like everyone else, but actual human communication doesn’t work that way.
I don’t really get that feeling. But if some people do, maybe it would make sense for Phil to add a clarifying remark that that’s not intended.
A Gish Gallop is presenting a lot of not-very-good points and then drawing a conclusion, so that you ignore people who disagree with your conclusion if they missed any of your points. This is not drawing a conclusion, and I think the points are individually interesting.
The book I wrote about a month or two ago, Real Presences—now that was a Gish Gallop.
I was going to downvote your comment, but then I realized you gave a useful answer to a question I asked, so that would be ungrateful of me, and I will say “thanks!” instead. I guess people are interpreting this as an attack on reductionism.
(Would it make sense to say “thanks and a downvote” when you’re grateful for a response that you think is wrong? That is, should the votes represent gratitude, assessment of usefulness in the larger context, or accuracy of claims made in the comment?)
A wrong response, once documented, has value: it gets addressed in the minds of all who would have objected with that response’s reasoning.
And I think you meant to say you read Real Presences, not wrote it. :P
Ah. “The book I wrote about a month ago” = “the book I [wrote about] a month ago”, not “the book I wrote [about a month ago]”.
A wrong response is worth something, but I wouldn’t want to vote it up, since that would be read as agreement.
Would downvoting imply disagreement, then?
I think an upvote suggests agreement with the content rather than gratefulness for it. If someone has a wrong opinion, but people are interested in why, and he explains it, and they all upvote it out of gratitude, he might interpret that as agreement.
If a downvote implies something other than the opposite of what an upvote implies, it becomes difficult to interpret votes.
Is it worth reading if we liked these notes? Does he elaborate much on specifics about these ideas? Or do these notes sum up everything fairly thoroughly?
Is this actually true, though? I’m inclined to think that your black-box analysis was badly done if it can’t account for hysteresis. Time is only relevant to the system insofar as time describes the rate at which various parts of the system do things, which makes it seem like it’s something that can be accounted for in indirect terms. Using indirect terms might be less efficient, but I don’t see any reason to believe it’s impossible.
I disagree that black box thinking is something we should strive to avoid. It seems too useful to me, and rather unavoidable anyways. Of course any particular black box model may be an oversimplification, but models that don’t try to aim themselves at simplicity are going to fail due to Occam’s Razor. (Also, in what sense is black box thinking a reductionist bias? It seems like the quintessential holistic bias to me.)
If the system’s state is a function of its environment and of its past, because some internal component of the system is “remembering” the past, then a timeless input/output analysis can’t predict the system’s output from its input.
I didn’t say it was. Nor did the author. Every approach has biases.
As to whether the article is worthwhile—well, it’s hard to get hold of, and most of it is focused on questions of evolutionary theory. If it interests you, you’d probably find it easier and more useful to get the book. You can sample it through the link in the post.
Reference 2 is available via Google Scholar, not to mention JSTOR. The principal reference is paywalled both here and here, but is available as a book on Libgen, as is the third.
What does “reductionism” mean here? “using models”?
The title is very strange, because it seems to suggest that the biases are relative to some other paradigm, but the criticisms are all internal: that the models are too simple.
A bias is relative to reality, not to another paradigm. The function f(x) + epsilon is a precise but biased estimator of f(x).
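A quick numerical illustration of that claim (mine, not the commenter’s): an estimator that adds a constant offset is perfectly repeatable (zero variance) yet systematically off (bias epsilon), while a noisy unbiased estimator is the reverse.

```python
# Illustration (mine): f(x) + epsilon has zero variance but bias epsilon;
# an unbiased noisy estimator has ~zero bias but nonzero variance.
import numpy as np

rng = np.random.default_rng(0)
f_true, epsilon = 3.0, 0.5

biased_precise = np.full(10_000, f_true + epsilon)      # same answer every time
unbiased_noisy = f_true + rng.normal(0.0, 1.0, 10_000)

print(biased_precise.mean() - f_true, biased_precise.var())  # 0.5, 0.0
print(unbiased_noisy.mean() - f_true, unbiased_noisy.var())  # ~0, ~1
```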
While this is somewhat different from what I take the author to mean, this reminded me of the two possible mental models of the self-environment relationship that Kevin Simler pictures under the “inhabitance” heading of Ethology and Personal Identity.
Example here
I’m not sure what you mean. Do you mean that there are conditions under which being sexually attracted to members of the same sex is evolutionarily advantageous?
Or do you mean that the genetic trait that manifests as homosexuality, manifests as another, advantageous, trait under some circumstances? If so, this seems like a version of Darreact’s “imprinting” theory.
Not disadvantageous just means not disadvantageous.
This is something to keep in mind while constructing your world-eating inductive agents, folks.