This looks not like rationality, but like one of David Stove’s examples of thought gone wrong. And like his examples, the context is just more of the same.
Can you explain what you see in Boyd’s words?
ETA: I’ve since googled to see who John Boyd was, and he was a notable military strategist credited with fundamental improvements to fighter aircraft design. That piqued my interest enough to read some more of his work, but I am still unable to see anything in it.
I agree. Many of the sentences in this essay have something horribly wrong with the thought process behind them, but I can’t even begin to describe what it is. There is a similar problem with the Umberto Eco quote below.
The above quote begins with this:
According to Heisenberg and the Second Law of Thermodynamics any attempt to do so in the real world will expose uncertainty and generate disorder. Taken together, these three notions support the idea that any inward-oriented and continued effort to improve the match-up of concept with observed reality will only increase the degree of mismatch.
I am pretty sure that Boyd is badly mangling Heisenberg, the Second Law of Thermodynamics, and Gödel’s Incompleteness Theorem, or at least that his extrapolations from them are way off.
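For reference (this gloss is mine, not anything from Boyd’s essay or the comments above), the standard statements of the first two results are quite narrow:

```latex
% Standard textbook statements, added for comparison with Boyd's use of them.
% Heisenberg's uncertainty principle: a lower bound on the product of the
% standard deviations of position and momentum for a single quantum state.
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}

% Second Law of Thermodynamics: the entropy of an isolated system never decreases.
\Delta S \;\ge\; 0
```

Gödel’s theorem is similarly narrow: a consistent formal system rich enough to express arithmetic contains true statements it cannot prove. None of the three says anything about concept-building in general driving one’s model of reality further from it.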
In fact, someone mentioning Heisenberg’s Uncertainty Principle, the Second Law of Thermodynamics, and Gödel’s Incompleteness Theorem all in one place triggers my Bayesian epistemic spam filter.
Some people’s minds seem to be infected with a strange solipsistic and skeptical epistemic disease where they think that trying to understand reality is futile, and will lead to either increasing mismatch of the map to the territory (in the case of this quote), or some other “terrible” results (in the case of the Eco quote).
How the hell do people come up with ideas like these??
trying to understand reality is futile, and will lead to either increasing mismatch of the map to the territory
How the hell do people come up with ideas like these??
That’s actually a position of reasonable people who engage in non-greedy reductionism, mostly replying to greedy reductionists (to use Dennett’s terminology).
To give an example, suppose you’re trying to get better at playing chess on a chess program running on a computer. Further suppose that the computer you’re using is a Turing machine implemented in Conway’s Game of Life. Does understanding the behavior of a Turing machine, or gliders and spaceships, or the basic rules of the Game of Life, increase your understanding of how to get better at chess? Will focusing on such things make you better or worse at playing chess?
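To make the levels-of-abstraction point concrete, here is a minimal sketch (my addition, not something from the parent comment) of the Game of Life update rule in Python; the glider is just the textbook pattern. Mastering this substrate-level rule tells you essentially nothing about chess strategy running many layers of abstraction above it.

```python
# A minimal sketch of Conway's Game of Life update rule (the standard B3/S23 rule).
# The glider below is a textbook pattern, not anything from Boyd or the comments above.
from collections import Counter

def step(live_cells):
    """Advance one generation of Life over a set of live (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and is alive now.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(step(step(step(step(glider)))))  # the same glider, translated one cell diagonally
```

Everything needed to simulate the Turing machine, and hence the chess program, is already present at that level, yet nothing about openings or endgames is visible in it.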
That said, I agree with you about the above quote.
That’s actually a position of reasonable people who engage in non-greedy reductionism, mostly replying to greedy reductionists (to use Dennett’s terminology).
Trying to understand reality is futile in a narrow and trivial sense: the map will never completely match the territory. That’s not the notion I’m criticizing.
In the case of Boyd, when he says “any inward-oriented and continued effort to improve the match-up of concept with observed reality will only increase the degree of mismatch,” he seems to imply that the harder we work to create a model of observed reality with our concepts, the worse the match will be. That’s a truly weird notion.
Maybe his quote goes from being an example of thinking gone horribly wrong, to thinking gone horribly explained, if we try to figure out what he means by “inward-oriented.”
When this orderly (and pleasant) state is reached the concept becomes a coherent pattern of ideas and interactions that can be used to describe some aspect of observed reality. As a consequence, there is little, or no, further appeal to alternative ideas and interactions in an effort to either expand, complete, or modify the concept.(19) Instead, the effort is turned inward towards fine tuning the ideas and interactions in order to improve generality and produce a more precise match of the conceptual pattern with reality. (19) Toward this end, the concept—and its internal workings—is tested and compared against observed phenomena over and over again in many different and subtle ways.(19) Such a repeated and inward-oriented effort to explain increasingly more subtle aspects of reality suggests the disturbing idea that perhaps, at some point, ambiguities, uncertainties, anomalies, or apparent inconsistencies may emerge to stifle a more general and precise match-up of concept with observed reality.(19) Why do we suspect this?
If we are charitable and creative, perhaps Boyd means something like this: “given a bad theory, additional ad hoc modifications increase the mismatch between the theory and observed reality.” Though I don’t think that’s true either: ad hoc modifications of a bad theory don’t “increase” its mismatch with observation. Rather, they stretch the theory until it does match the observations, making the theory more strained.

A much better framework for discussing matches of theory with observation than Boyd’s 9th-grade philosophy paper is Imre Lakatos’ work on “progressive” vs. “degenerating” research programs.
The key word here is “inward-oriented”; that is, based on internal logic rather than on new evidence. When a previous theory is destroyed by its mismatch with reality, the facts that supported it are either revealed as untrue or merged into a newer and more correct theory, one that incorporates new evidence and different links between the facts to reach a different, and presumably superior, conclusion.
On second thought, that was a bad section to quote, although Boyd never really gave any better ones in his essay. I tried to note the way out in the last sentence (“Fortunately, there is a way out.”) without bringing in too much of Boyd’s pointless terminology. I clearly failed; my bad.