Where Philosophy Meets Science
Looking back on early quantum physics—not for purposes of admonishing the major figures, or to claim that we could have done better if we’d been born into that era, but in order to try and learn a moral, and do better next time—looking back on the dark ages of quantum physics, I say, I would nominate as the “most basic” error…
… not that they tried to reverse course on the last three thousand years of science suggesting that mind was complex within physics rather than fundamental in physics. This is Science, and we do have revolutions here. Every now and then you’ve got to reverse a trend. The future is always absurd and never unlawful.
I would nominate, as the basic error not to repeat next time, that the early scientists forgot that they themselves were made out of particles.
I mean, I’m sure that most of them knew it in theory.
And yet they didn’t notice that putting a sensor to detect a passing electron, or even knowing about the electron’s history, was an example of “particles in different places.” So they didn’t notice that a quantum theory of distinct configurations already explained the experimental result, without any need to invoke consciousness.
In the ancestral environment, humans were often faced with the adaptively relevant task of predicting other humans. For which purpose you thought of your fellow humans as having thoughts, knowing things and feeling things, rather than thinking of them as being made up of particles. In fact, many hunter-gatherer tribes may not even have known that particles existed. It’s much more intuitive—it feels simpler—to think about someone “knowing” something, than to think about their brain’s particles occupying a different state. It’s easier to phrase your explanations in terms of what people know; it feels more natural; it leaps more readily to mind.
Just as, once upon a time, it was easier to imagine Thor throwing lightning bolts, than to imagine Maxwell’s Equations—even though Maxwell’s Equations can be described by a computer program vastly smaller than the program for an intelligent agent like Thor.
So the ancient physicists found it natural to think, “I know where the photon was… what difference could that make?” Not, “My brain’s particles’ current state correlates to the photon’s history… what difference could that make?”
And, similarly, because it felt easy and intuitive to model reality in terms of people knowing things, and the decomposition of knowing into brain states did not leap so readily to mind, it seemed like a simple theory to say that a configuration could have amplitude only “if you didn’t know better.”
To turn the dualistic quantum hypothesis into a formal theory—one that could be written out as a computer program, without human scientists deciding when an “observation” occurred—you would have to specify what it meant for an “observer” to “know” something, in terms your computer program could compute.
So is your theory of fundamental physics going to examine all the particles in a human brain, and decide when those particles “know” something, in order to compute the motions of particles? But then how do you compute the motion of the particles in the brain itself? Wouldn’t there be a potential infinite recursion?
But so long as the terms of the theory were being processed by human scientists, they just knew when an “observation” had occurred. You said an “observation” occurred whenever it had to occur in order for the experimental predictions to come out right—a subtle form of constant tweaking.
(Remember, the basics of quantum theory were formulated before Alan Turing said anything about Turing machines, and way before the concept of computation was popularly known. The distinction between an effective formal theory, and one that required human interpretation, was not as clear then as now. Easy to pinpoint the problems in hindsight; you shouldn’t learn the lesson that problems are usually this obvious in foresight.)
Looking back, it may seem like one meta-lesson to learn from history, is that philosophy really matters in science—it’s not just some adjunct of a separate academic field.
After all, the early quantum scientists were doing all the right experiments. It was their interpretation that was off. And the problems of interpretation were not the result of their getting the statistics wrong.
Looking back, it seems like the errors they made were errors in the kind of thinking that we would describe as, well, “philosophical.”
When we look back and ask, “How could the early quantum scientists have done better, even in principle?” it seems that the insights they needed were philosophical ones.
And yet it wasn’t professional philosophers who swooped in and solved the problem and cleared up the mystery and made everything normal again. It was, well, physicists.
Arguably, Leibniz was at least as foresightful about quantum physics, as Democritus was once thought to have been foresightful about atoms. But that is hindsight. It’s the result of looking at the solution, and thinking back, and saying, “Hey, Leibniz said something like that.”
Even where one philosopher gets it right in advance, it’s usually science that ends up telling us which philosopher is right—not the prior consensus of the philosophical community.
I think this has something fundamental to say about the nature of philosophy, and the interface between philosophy and science.
It was once said that every science begins as philosophy, but then grows up and leaves the philosophical womb, so that at any given time, “Philosophy” is what we haven’t turned into science yet.
I suggest that when we look at the history of quantum physics and say, “The insights they needed were philosophical insights,” what we are really seeing is that the insight they needed was of a form that is not yet taught in standardized academic classes, and not yet reduced to calculation.
Once upon a time, the notion of the scientific method—updating beliefs based on experimental evidence—was a philosophical notion. But it was not championed by professional philosophers. It was the real-world power of science that showed that scientific epistemology was good epistemology, not a prior consensus of philosophers.
Today, this philosophy of belief-updating is beginning to be reduced to calculation—statistics, Bayesian probability theory.
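To make “reduced to calculation” concrete: the core of that calculus is a single standard identity, Bayes’ theorem, which says exactly how far a belief should shift when the evidence comes in:

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) \;+\; P(E \mid \neg H)\,P(\neg H)}$$

A hypothesis given 1% prior credence, whose prediction would come true 80% of the time if it were right and 10% of the time if it were wrong, ends up at roughly 7% once that prediction is observed; no verbal argument is required.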
But back in Galileo’s era, it was solely vague verbal arguments that said you should try to produce numerical predictions of experimental results, rather than consulting the Bible or Aristotle.
At the frontier of science, and especially at the frontier of scientific chaos and scientific confusion, you find problems of thinking that are not taught in academic courses, and that have not been reduced to calculation. And this will seem like a domain of philosophy; it will seem that you must do philosophical thinking in order to sort out the confusion. But when history looks back, I’m afraid, it is usually not a professional philosopher who wins all the marbles—because it takes intimate involvement with the scientific domain in order to do the philosophical thinking. Even if, afterward, it all seems knowable a priori; and even if, afterward, some philosopher out there actually got it a priori; even so, it takes intimate involvement to see it in practice, and experimental results to tell the world which philosopher won.
I suggest that, like ethics, philosophy really is important, but it is only practiced effectively from within a science. Trying to do the philosophy of a frontier science, as a separate academic profession, is as much a mistake as trying to have separate ethicists. You end up with ethicists who speak mainly to other ethicists, and philosophers who speak mainly to other philosophers.
This is not to say that there is no place for professional philosophers in the world. Some problems are so chaotic that there is no established place for them at all in the halls of science. But those “professional philosophers” would be very, very wise to learn every scrap of relevant-seeming science that they can possibly get their hands on. They should not be surprised at the prospect that experiment, and not debate, will finally settle the argument. They should not flinch from running their own experiments, if they can possibly think of any.
That, I think, is the lesson of history.
“Just as, once upon a time, it was easier to imagine Thor throwing lightning bolts, than to imagine Maxwell’s Equations—even though Maxwell’s Equations can be described by a computer program vastly smaller than the program for an intelligent agent like Thor.”
Hmmm… perhaps we could define a “relative Kolmogorov complexity”, K[X], where the K[X] of Y is the size of the smallest .diff file that alters program X to make it output Y and then halt. K(Maxwell) < K(Thor), but it seems quite likely that K[Human](Maxwell) > K[Human](Thor).
To reply years after the original post...
There is a relative notion of Kolmogorov complexity. Roughly, K(X | Y) is the size of the smallest program that outputs X given Y as input. I agree that K(Thor | Human) << K(Thor), but I believe that this is still much larger than K(Maxwell | Human). This is because K(Maxwell | Human) ≤ K(Maxwell), which is really small. On the other hand, to specify Thor given a description of a human, you get huge savings on describing how a humanoid works, but still have the task of describing what lightning actually is (as well as the task of describing how Thor’s thoughts translate into lightning).
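For readers following the notation in the reply: the standard definition of conditional Kolmogorov complexity, relative to some fixed universal machine U, is

$$K(X \mid Y) \;=\; \min\{\, |p| \;:\; U(p, Y) = X \,\}$$

(the first comment’s .diff-based version is essentially the same idea), so the reply’s claim, spelled out, is

$$K(\text{Maxwell} \mid \text{Human}) \;\le\; K(\text{Maxwell}) \qquad\text{and}\qquad K(\text{Maxwell} \mid \text{Human}) \;\ll\; K(\text{Thor} \mid \text{Human}) \;\ll\; K(\text{Thor}).$$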
Sometime last year, I got involved in studying the foundations of quantum mechanics. Like many people before me, I rediscovered decoherence. (In my case, the context was a heavy atom interacting with a Bose-Einstein condensate.)
After I discussed my work with one of our resident experts in the topic, he pointed out to me that David Bohm had made the same argument (in words, not mathematically) in the early 1950s. In fact, the idea had even been present before that, though Bohm’s explanation is the best of the early ones. He postulated the following explanation for why the Copenhagen interpretation became the dominant one: the Copenhagen crowd had more Ph.D. students, and network effects (Copenhagen people becoming editors at PRL, for instance) pushed a nonsensical theory into the mainstream.
Great post. None of my past criticisms of overstating certainty apply (not with statements like “In fact, many hunter-gatherer tribes may not even have known that particles existed”). Nor do I see much dialectic-seeking. In my opinion, it’s an Eliezer post with only the good stuff in it, and better than average good stuff, too.
Chris, could you recommend an introduction to decoherence for a grad student in physics? I am dumbstruck by how difficult it is to learn about it and the seeming lack of an authoritative consensus. Is there a proper review article? Is full-on decoherence taught in any physics grad classes, anywhere?
Eliezer: Minor nitpick: I think it’s the 47th anniversary, not 50th
suggest that, like ethics,
Link to SIAI blog here broken (404 Not Found).
Komponisto and Psy-Kosh, fixed.
“Once upon a time, the notion of the scientific method—updating beliefs based on experimental evidence—was a philosophical notion.”
“But back in Galileo’s era, it was solely vague verbal arguments that said you should try to produce numerical predictions of experimental results, rather than consulting the Bible or Aristotle.”
As far as I know, the first hints of the scientific method (testing theories by experiment) appear in the writings of Roger Bacon (who lived a few hundred years before the Francis Bacon that people seem to confuse him with). He argued that when interpreting Aristotle, instead of arguing over what the modern translation of an obscure Greek word was, you should use experiments to try out the alternative meanings. Experiment was conceived as a method of consulting Aristotle.
Humble beginnings.
It’s an error to call “vague verbal arguments” philosophy. Vague verbal arguments are just that, whereas philosophy is a clearly delineated academic discipline (it’s actually easier to file works of philosophy on a single shelf than scientific treatises; even a modern work of philosophy is at most one or two degrees of separation from Aristotle, whereas a modern physics paper is many citations removed from Galileo). We can make vague verbal arguments without doing philosophy or committing ourselves to answer to the philosophers’ objections.
It’s an historical error to suggest Galileo was following even a vague verbal argument though. Galileo began within the Aristotelian tradition (not as a philosopher; Aristotle was used by practically-oriented people working on mechanical problems at that time) and came to reject it completely through the process of getting his mechanics to work. Galileo was famously not a systematist and was derided then (and now) as a terrible philosopher. He hated the philosophers in turn. See Drake’s Galileo at Work for a comprehensive overview of Galileo’s development as a scientist.
So no, I don’t think philosophy is important, even within science. What scientists do is chat informally about things, have insights, think up experiments and solve problems. This is not philosophy; it’s just people exercising their mental faculties. It’s no more philosophy than cooking or carpentry is philosophy. I’ve worked in a lab and I’ve studied philosophy and there’s no overlap in style or method. It would be precisely as accurate to describe the “vague verbal arguments” of scientists as theology as it would philosophy.
So how does one avoid this basic error while formulating a good theory of nature?
Optimistically, and very speculatively, I would like a good theory to, at least, formally suggest how a bunch of particles (or whatever concept we replace them with in the future) can come up with a good theory of themselves in the first place.
Or why not make this the starting ansatz, so that one builds upon this very requirement in a way similar to how one builds a quantum field in a manifestly covariant way? Since infinite recursion seems to get in the way, maybe these good theories should incorporate a fundamental “unit of approximation” related to the maximal recursion depth or a complexity cutoff.
Excuse my rambling.
Jess: I can give you the standard references (which you’ve probably already seen), but they are mostly useless. This is a really weird field to work in; I’d strongly recommend against making a career of it. Tough to find jobs, for rather stupid political reasons.
The only really useful work is a paper about measurement by David Bohm from the ’50s (don’t have it with me). He describes decoherence/measurement in words, and his explanation makes sense. It’s good to get an intuitive picture, but not for much else.
Apart from that, all I can suggest is that you build a toy model of a quantum system coupled to an observation device and solve it. That will explain far more than any paper I’ve ever read.
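For anyone who wants to take that advice literally, here is a minimal sketch of such a toy model (not from the thread; a single extra qubit stands in for the whole observation device, and a CNOT-style coupling stands in for the measurement interaction):

```python
# Minimal "system + detector" decoherence toy model, using only NumPy.
# The system qubit starts in a superposition; the detector qubit starts
# "ready" in |0>.  A CNOT-style unitary lets the detector record which
# branch the system is in.  Tracing out the detector then shows the
# system's off-diagonal (interference) terms vanishing.

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def density_matrix(psi):
    """Return |psi><psi| for a pure state vector."""
    return np.outer(psi, psi.conj())

def trace_out_detector(rho):
    """Reduced density matrix of the system qubit.

    The joint basis ordering is |system, detector>, so the 4x4 rho
    reshapes to indices [s, d, s', d']; summing over d = d' traces
    out the detector.
    """
    return np.einsum('adbd->ab', rho.reshape(2, 2, 2, 2))

# System qubit in an equal superposition, detector in |0>.
system = (ket0 + ket1) / np.sqrt(2)
detector = ket0
joint = np.kron(system, detector)

# CNOT with the system as control: the detector flips iff the system
# is in |1>.  This unitary interaction is the entire "measurement" --
# it just correlates the detector's state with the system's.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

before = trace_out_detector(density_matrix(joint))
after = trace_out_detector(density_matrix(CNOT @ joint))

print("interference term before interaction:", before[0, 1])  # ~0.5
print("interference term after interaction: ", after[0, 1])   # ~0.0
```

The toy model makes the same point the post makes in words: once the detector’s particles are correlated with the system, the interference terms disappear from the system’s reduced description, with no appeal to anyone “knowing” anything.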
Eliezer, your advice to philosophers is similar to Paul Graham’s. http://www.paulgraham.com/philosophy.html I really like what both of you have to say about philosophy.
It’s typical, in the European tradition, to credit Francis Bacon with inventing Science. However, Bacon was explicitly cribbing from a man who lived centuries earlier in Iraq and Egypt, who actually originated the ideas and methods, and who applied them in an enduring work. The man was Al-Haytham, and the work was his book on optics. He was known at the time in Europe as Alhazen or Alhacen.
Make no mistake: Al-Haytham was fully aware of the importance of his ideas. Bacon deserves credit for also recognizing that importance, and for popularizing the ideas among the notoriously insular English. He does not deserve credit for originating anything so profound.
I should note for completeness that al-Haytham also lived centuries before Roger Bacon. It’s not clear if Roger cribbed, also, but his exhortations to experiment described nothing like the complete system for establishing quantifiable truth found in the Optics.
Someone (Wigner?) once commented on the surprising efficacy of mathematics, which was developed by people who did not believe that it would ever serve any purpose, and yet ended up being at the core of many pragmatic solutions.
A companion observation is on the surprising inefficacy of philosophy, which is intended to solve our greatest problems, and never does. Like Eliezer, my impression is that philosophy just generates a bunch of hypotheses, with no way of choosing between them, until the right hypothesis is eventually isolated by scientists. Philosophy is usually an attempt to do science without all the hard work. One might call philosophy the “science of untestable hypotheses”.
But, on the other hand, there must be cases where philosophical inclinations have influenced people to pursue lines of research that solved some problem sooner than it would have been solved without the initial philosophical inclination.
One example is the initial conception that the Universe could be described mathematically. Kepler and Newton worked so hard at finding mathematical equations to govern the movements of celestial bodies because they believed that God must have designed a Universe according to some order. If they’d been atheists, they might never have done so.
This example doesn’t redeem philosophy, because I believe their philosophies were helpful only by chance. I’d like to see how many examples there are of philosophical notions that sped up research that proved them correct. Can anyone think of some?
gaaahhh. I stop reading for a few days, and on return, find this...
Eliezer, what do these distinctions even mean? I know philosophers who do scary bayesian things, whose work looks a lot—a lot—like math. I know scientists who make vague verbal arguments. I know scientists who work on the “theory” side whose work is barely informed by experiments at all, I know philosophers who are trying to do experiments. It seems like your real distinction is between a priori and a posteriori, and you’ve just flung “philosophy” into the former and “science” into the latter, basically at random.
(I defy you to find an experimental test for Bayes Rule, incidentally—or to utter some non-question-begging statistical principle by which the results could be evaluated.)
Eliezer Yudkowsky, I enjoyed your blog post very much.
You wrote, “It was once said that every science begins as philosophy, but then grows up and leaves the philosophical womb, so that at any given time, ‘Philosophy’ is what we haven’t turned into science yet.” Who originally said that? I’m not sure I agree, but it’s an interesting view of philosophy.
Anyway, I especially like when you say that philosophers would benefit from learning as much as they can of the science related to their field of interest. The science, experience, and information are that with which the philosophers philosophize. So they can philosophize even more usefully if they have more knowledge of the relevant science.
“After all, the early quantum scientists were doing all the right experiments. It was their interpretation that was off. And the problems of interpretation were not the result of their getting the statistics wrong.”
The Accidental Theorist, by Paul Krugman:
http://web.mit.edu/krugman/www/hotdog.html
I don’t think this idea gets the recognition it deserves.
For a nice example of a philosopher who isn’t afraid to conduct their own experiments, see the work of Sarah Jane Leslie of Princeton. Really interesting stuff on the truth conditions (or lack thereof) of generics (e.g. “ducks lay eggs”, “mosquitoes carry the West Nile Virus”).
WHERE PHILOSOPHY (rational instrumentalism) MEETS SCIENCE (physical instrumentalism)?
Philosophy and Science are identical processes until we attempt to use one of them without the other.
That point of demarcation is determined by the limits beyond which we cannot construct either (a) logical, or (b) physical, instruments with which to eliminate error, bias, wishful thinking, suggestion, loading and framing, obscurantism, propaganda, pseudorationalism, pseudoscience, and outright deceit.