What Science Got Wrong and Why
An article at The Edge asks scientific experts in various fields for their favorite examples of theories in their fields that turned out to be wrong. Most relevantly to Less Wrong, many of those scientists discuss what their disciplines did wrong that produced the misconceptions. For example, Irene Pepperberg, not surprisingly, discusses scientists’ failure to appreciate avian intelligence. She emphasizes that this failure resulted from a combination of factors, including a lack of appreciation that high-level cognition could occur without the mammalian cortex, and the fact that many early studies used pigeons, which just aren’t that bright.
Some of these answers are definitely a bit confused, e.g. Sheldrake’s. And Matthew Ritchie seems to be talking about an entirely different topic...
Yes, they aren’t all as good as Pepperberg’s. Geoffrey Carr is also clearly talking about issues well outside his expertise. The history of spontaneous generation is much more complicated than he describes, and the narrative of everything hinging on Pasteur, though often repeated, is historically inaccurate. In Ritchie’s case, I think he’s trying to say that the incorrect idea was art historians’ belief that Einstein and Minkowski influenced Cubism?
Yeah; I’d actually call that a pretty interesting mistake, especially because it’s so ridiculous. I don’t see how it could happen unless you basically just ignored the actual art and the actual history in favor of some half-baked notion of “the fourth dimension is time”. While I can’t claim to have ever paid attention to art history, a mistake like that makes me wonder just how much actual history art historians are doing. Unfortunately, while Ritchie understands this is a mistake, he doesn’t seem to have worked through the confusion to the point of being able to present it in a way that’s really correct...
I guess it’s not a different topic after all, as I originally said; it’s the same topic applied to a different discipline. I thought it was a different topic because he writes as if he were going off on a tangent.
I’m not sure that it is that large a mistake. It seems that the mistake is that the Cubists weren’t influenced by time as a fourth dimension but by the pre-Einsteinian idea of more than three spatial dimensions. If that’s what Ritchie is saying, then the mistake might have been subtle to someone who didn’t know much math or physics. I don’t think that understanding this is helped by Ritchie’s writing style.
George Lakoff attacks “the claims of enlightenment reason” and presents “the realities”. At first I thought he was attacking rationality, but some translation shows the opposite. Heavy quoting of his blurb ahead.
“enlightenment reason, which claims that if you just tell people the facts about their interests, they will reason to the right conclusion, since reason is supposed to be universal, logical, and based on self-interest.” In some sense, what he calls “reason” we call biases.
“Claim: Reason can fit the world directly.”—Here we say: “the map is not the territory”
“Claim: Thought is conscious. But neuroscience shows that it is about 98 percent unconscious.”—We might say the brain uses heuristics.
“Claim: Language is neutral, and can fit the world directly.”—Debunking this is a key part of Lakoff’s research. He argues that the structure of language shapes our thoughts. I can’t immediately think of a directly comparable line of thought on Less Wrong. http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/ seems a little different, but maybe someone else can see the connection more easily.
Also from Lakoff:
Gee, I’ll bet he has a clever explanation of the 2008 election too.
And I suspect that in 2012, no matter what happens, he’ll have a very good explanation. And 2014, etc., etc. You think he might need to read the Sequences just a bit?
At this point I suspect he’s applying the Litany of Upton Sinclair.
It’s a living.
I don’t know exactly what point Lakoff is trying to make, but there is an anti-Sequences point that I have been tempted to make from time to time. And Lakoff’s words seem to be a pretty good jumping-off place.
I would say not only “The map is not the territory”. I would say (with Lakoff?) that it is impossible to even speak about the territory with any precision. Language just doesn’t work for that.
The foundation of Bayesian realism is the dogma that there is only one territory, though there can be many maps (one per map-maker). However, it is possible by use of language for two rational minds to agree on a map. Platonism in mathematics is the canonical example. Everyone agrees on what the standard model of arithmetic looks like. And there is agreement on standard ‘models’ of set theory as well; the only controversy deals with which model ought to be considered ‘standard’. But, outside mathematics, although it is possible to reach agreement on what any particular map says, there is no way to reach agreement on which map best corresponds to the territory.
Paradoxically, it is the subjective “maps” (things that exist only in people’s heads) that are the cold, hard, clear-cut entities which can be studied using the mathematical tools of rationality. But it is the objective “territories” which remain unknowable, controversial, and in some sense unspeakable.
There was a post a long time ago from Eliezer that I could not find (edit: thank you, Plasmon!) with a quick search of the site. In it he listed a set of characteristics (“blue”, “is egg-shaped”) with a label for those characteristics (“blegg!”) at the center, and drew two graphs. One graph, which is the native description in most minds, has a node at the center for the label (“blegg!”) with the characteristic nodes coming out of it (something is blue iff it is a blegg iff it is egg-shaped). The other graph has no label node (something could be blue or egg-shaped; if it is both, it might be a blegg!).
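If it helps, here is a loose sketch of the two shapes as adjacency lists. This is my own illustration, not the post’s actual diagrams, and the extra characteristics (“furred”, “flexible”) are made up for the example:

```python
# Shape 1: a central label node. Evidence about any one characteristic
# flows through "blegg!" on its way to predictions about the others.
central_label = {
    "blegg!": ["blue", "egg-shaped", "furred", "flexible"],
}

# Shape 2: no label node. The characteristics predict one another
# directly; "blegg" is at most a name for a cluster in this graph.
no_label = {
    "blue":       ["egg-shaped", "furred", "flexible"],
    "egg-shaped": ["blue", "furred", "flexible"],
    "furred":     ["blue", "egg-shaped", "flexible"],
    "flexible":   ["blue", "egg-shaped", "furred"],
}
```

In the first shape the question “but is it really a blegg?” points at an actual node; in the second there is nothing left for it to point at.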
It seems to me that you conceive of maps as being the structural units of the universe. That is, there are a bunch of “map” nodes and no central territory node.
I have, in the past, felt a great sympathy for this idea. But I no longer subscribe to it.
There is one way in which this conception is simpler: it contains fewer nodes! One for each mapmaker, rather than all of those PLUS one for the territory. Also, it has the satisfying deep wisdom relation that Louie discusses in his first point.
There are several ways in which it is less simple. It has FAR more connections: around n(n−1)/2 rather than n. Even if not all mapmakers interact, with an average of i interactions per mapmaker it still has about n·i/2 connections. That’s WAY MORE.
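To spell the count out (a quick tally, under the assumption that every pair of mapmakers can compare maps):

$$\binom{n}{2} = \frac{n(n-1)}{2} \;\text{map-to-map links} \quad \text{vs.} \quad n \;\text{map-to-territory links}, \qquad \text{e.g. } n = 100 \Rightarrow 4950 \text{ vs. } 100.$$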
Also, it requires that we have a strong sense of who the mapmakers are; that is, we have to draw a little circle around where all the maps are. This seems like a very odd, very complicated, not very materialist proposition, one which has all the same flaws that the Copenhagen interpretation does.
Boy, did I fail to communicate! No, that is not how I conceive of maps. “Structural units of the universe” sounds more like territory to me. You and I seem to have completely diverging understandings of what those neural net diagrams were about as well.
I think of maps as being things like Newton’s theory of gravitation, QED, billiard-ball models of kinetic theory, and the approximation of the US economy as a free market. Einstein’s theory of gravitation is a better map than Newton’s; MWI and CI are competing maps of the same territory.
Your final paragraph signals to me that we are not likely to succeed in communicating. I am not even thinking of a materialistic conception of maps being embedded in brains—as a part of the territory somehow representing the territory. I am perfectly happy maintaining a Cartesian dualism—thinking of maps as living in minds and of territory as composed of a different kind of substance altogether.
Your final paragraph establishes its point well; I agree we will not end up seeing eye to eye on this matter. However, out of curiosity, I would ask if you can tell me how you go about finding out where mind-stuff is.
Is it in every human brain? When is it put there? Is it in a monkey brain? A chimpanzee brain? An octopus brain? Would it be in an em computation? Do you believe in p-zombies?
It occurs to me that this is probably coming off as more hostile than I intend. I used to have a sense of dualism, but the fact that there are questions about it that I do not know how to answer turned me off it. I am curious whether you answered these questions or ignored them, not as a matter of criticism.
I am really badly failing to communicate today. My fault, not yours. No, I am not asserting Cartesian dualism as a theory about the true nature of reality. I am a monist, a materialist. And in a sense, a reductionist. But not a naive one who thinks that high-level concepts should be discarded in favor of low level ones as soon as possible because they are closer to the ‘truth’.
Yes, those were scare quotes around the word ‘truth’. But the reason I scare-quote the word is not that I deny that truth exists. Of course the word has meaning. It is just that neither I nor anyone else can provide and justify any operational definition. We don’t know the truth. We can’t perceive the territory. We can only construct maps, talk about the maps we have created with other people, and evaluate the maps against the sense impressions that arrive at our minds.
Now all of this takes place in our minds. Minds, not brains. We need to pretend to a belief in dualism in order to even properly think the thought that the map is not the territory. Cartesian dualism is not a mistake. Any more than Newtonian physics is a mistake. When used correctly it enables you to understand what is happening.
No doubt this will have been another failure to communicate. Maybe I’ll try again someday.
Okay, this is much better, and different from what I’d thought you’d been saying.
When you say “we” and “minds” you are getting at something and here is my attempt to see if I’ve understood:
Given an algorithm which models itself (something like a mind, but not so specific; taboo “mind”) and its environment, that algorithm must recognize the difference between its model of its environment, which is filtered through its I/O devices of whatever form, and the environment itself.
The algorithm’s model should reflect that the information contained in the environment may be in a different format from the information contained in the model (a dualism of a sort), and that in refining the model it is optimizing for predictive accuracy as opposed to truth.
Is this similar to what you mean?
No. If it involves self modeling, it is very far from what I am talking about. Give it up. It is just not worth it.
Okay. Sorry ):
The post you’re talking about is probably How An Algorithm Feels From Inside
Yes it is, thank you! I’ll add the link in.
Hm. I dunno, I think that talking about your map can, statistically at least, be identical to talking about the territory. At least assuming a few simple applications of Occam’s Razor like “the territory exists.”
I’m not sure what is meant by “identical, statistically at least”.
Also, I doubt that Occam’s Razor even deals with territory. Isn’t it advice on map selection? Choosing between maps based on their desirable properties as maps, rather than based on how well they seem to empirically match up with the territory?
Our maps are necessarily finite. (There is only a finite amount of stuff in our brains; saying that there is infinite stuff in our brains in any meaningful sense leads to bad predictions.)
There may well be uncountable things in “the territory.” For example, using real numbers to describe distances works pretty well.
We can have countable descriptions of the real numbers that allow us to use complicated models in ways that are consistent and helpful. But they aren’t COMPLETE descriptions of the real numbers.
This isn’t really “statistical” identification but it is similar in spirit.
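A small, concrete instance of that incompleteness, with floating point standing in for any finite description of the reals (a minimal sketch of my own, not anyone’s argument above):

```python
# IEEE 754 doubles are a finite description of the reals: there are only
# finitely many bit patterns, so almost every real is unrepresentable.
a = 0.1 + 0.2
print(a == 0.3)  # False: none of 0.1, 0.2, 0.3 is exactly representable
print(a)         # 0.30000000000000004
# The description is consistent and predictively useful, but it is not
# a COMPLETE description of the real numbers.
```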
Ah. So you are interpreting your statement:
as saying something like “maps can be error-free finite approximations of the territory—deficient only in terms of poor resolution”. Yes, indeed. Maps can conceivably be completely correct in this sense. In which case, talking about the map does, in some sense, indirectly talk about the territory.
However, I would emphasize that you cannot know that a map is correct in this sense. And two people may agree on what a map says and how to speak about places on the map (“Let’s stop at that rest area symbol about half an inch north of the state line”), but if they don’t agree that the map correctly represents the territory, then they are not talking about the territory.
Note: he’s not me, even though our names both start with “ma” :P
I meant that even though we cannot put into words the exact nature of reality, our words, which are about our maps, can still tell other people things about the territory. For example, if I say “it’s raining,” you could go through “well, P(raining|he said that) is about 0.9, so I’ll grab an umbrella.”
So my opinion is that the territory isn’t especially unspeakable—nothing’s perfect.
Thx for the clarification of identity; I was confused. :(
Yes, your example does seem to illustrate the transfer of information about the territory between minds by use of language.
But when you ask “Where did that number 0.9 come from?” things get more complicated.
In my view, 0.9 is a statistic representing a correlation between your map and my map. Territory doesn’t even come into it—at least not directly. Suppose I have come up with that 0.9 estimate by keeping track of how often our statements “It is raining” or “It is not raining” agree or disagree. “Why the discrepancy?”, I ask myself. Do you sometimes lie? Do you mean “The streets are wet” whereas I mean “Water is falling”? Are you talking about rain falling at a different location?
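A minimal sketch of that bookkeeping, with made-up tallies (the counts are illustrative, chosen so the estimate comes out to 0.9):

```python
# Hypothetical paired reports gathered over many days:
# (what you said, what my own map says).
reports = ([("raining", "raining")] * 45
           + [("raining", "not raining")] * 5
           + [("not raining", "raining")] * 3
           + [("not raining", "not raining")] * 47)

# Estimate P(my map says "raining" | you said "raining") by frequency.
when_you_said_rain = [mine for yours, mine in reports if yours == "raining"]
p = when_you_said_rain.count("raining") / len(when_you_said_rain)
print(p)  # 0.9 -- computed entirely from the two maps' outputs
```

Nothing in that computation consults the territory; it only compares the outputs of the two maps.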
We can conduct a discussion to help determine which of these hypotheses best explains the discrepancy. In conducting that discussion, we will be talking about our maps. We don’t need territory to discuss these hypotheses; we can do it by discussing thought experiments involving hypothetical maps (as you and I are doing now!).
But, you might object, the hypothesis which would justify the Bayesian inference involving the umbrella has to involve some kind of shared territory underlying our maps. Well, maybe it does. But, I claim that we cannot talk about the nature of that shared territory. All we can do is to construct a shared map.