The history of philosophy can’t really have been one of thousands of years of nearly unrelenting adoration of stupidity. What probably happened is that philosophers became popular only if their ideas were simple enough and appealing enough. There is a bandpass filter on philosophy, and it has both a low and a high cutoff.
We propagate knowledge by collective judgements about it. In fields where we can’t eliminate bad ideas by experiment, both the very worst and the very best ideas must be rejected. The requirement that an influential philosopher appeal to a large group of philosophers guarantees that relatively simplistic, self-aggrandizing, or at least inoffensive crap, with enough fuzziness to give one leeway in how to interpret it, will be favored over careful, complex, impolite ideas.
I recently looked at a bunch of my grad-school AI textbooks. It made me ill to think how many years I wasted studying an entire discipline filled with almost nothing but knowledge that has so far proven useless to me across a wide range of problems and disciplines for anything other than writing computer games—and useful there only because you can scale the game down and restrict its environment until the techniques work. Is this a different way of going wrong than the philosophers’, or is it the same thing? Many of the bad-old-fashioned-AI (BOFAI) ways of doing things are quite difficult: you can’t accuse Kripke or Quine of being simplistic.
I wonder if the internet can provide a way for thinkers of the highest quality to find each other, and pass on ideas to each other that would go over the head of the larger professional bodies. I wonder if these ideas would influence the world, or remain useless in the hands of their brilliant but uninfluential custodians.
However, my experience on LW has shown that the best and brightest people are still very bad at conveying even relatively simple ideas to each other.
I have also seen instances where nearly an entire field is making some elementary error, which people outside that field can see more clearly, but which they can’t communicate to people in that field because they would have to spend years learning enough about the field to write a paper, probably with half a year’s worth of experimental work, and not get rejected, even if their insight is something that could be communicated in a single sentence. I wish there were some Twitter version of Science that published only pithy, insightful comments, unsubstantiated by experiment. But since I’ve also seen cases where researchers spent decades gathering data and publishing critiques in their field without getting any traction, this alone is not enough.
How can we use the internet to recognize good ideas and get them to the people who can use them? Cross-discipline reputation brokers could be part of the solution.
On the contrary, philosophers became popular only if their ideas were complicated enough to fill a book. The ideas that were simple enough to be true were also too short to publish.
An interesting possibility. (Nitpick: “Simple enough to be true” implies that complex ideas can’t be true. This is wrong.)
Can you give an example of a simple but non-obvious truth that was available but passed over in philosophy?
What do you mean by “available”?
E.g., I’m not interested in hearing that medieval philosophers ignored the idea that the motion of the planets is governed by the same laws that govern the motion of bodies on earth.
So, are we looking for something which is:
Simple,
True,
Not obvious,
Was claimed as true by someone or other,
But mostly ignored?
Perhaps Aristarchus and his heliocentrism would fit the bill (while not strictly true, it was truer than the alternative).
I for one would be interested in hearing these sentences, and also which fields you feel are being held back by simple errors of logic. The margins here are quite large ;).
Some examples off the top of my head:
Rodney Brooks and others published many papers in the 1980s on reactive robotics. (Yes, reactive robotics is useful for some tasks; but the claims being made around 1990 were that non-symbolic, non-representational AI was better than representational AI at just about everything and could now replace it.) Psychologists and linguists could immediately see that the reactive-robotics literature was chock-full of the same mistakes that had been pointed out in behaviorist psychology in the decade after 1956 (see, e.g., Noam Chomsky’s review of Skinner’s Verbal Behavior).
To be fair, I’ll give an example involving Chomsky on the receiving end: Chomsky prominently and repeatedly claims that children are not exposed to enough language to get enough information to learn a grammar. This claim is the basis of an entire school of linguistic thought that says there must be a universal human grammar built into the human brain at birth. It is trivial to demonstrate that it is wrong, by taking a large grammar, such as one used by any NLP program (and, yes, they can handle most of the grammar of a 6-year-old), and computing the amount of information needed to specify that grammar; and also computing the amount of information present in, say, a book. Even before you adjust your estimate of the information needed to specify a grammar by dividing by the number of adequate, nearly-equivalent grammars (which reduces the information needed by orders of magnitude), you find you only need a few books-worth of information. But linguists don’t know information theory very well.
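To put rough numbers on this, here is a back-of-the-envelope sketch in Python. Every figure in it (entry count, bits per entry, book length, bits per word) is an assumption picked for illustration, not a measurement; the point is only that the two quantities land in the same ballpark.

```python
# All numbers are illustrative assumptions, not measurements.

# Specifying a broad-coverage grammar: assume ~20,000 rules / lexical
# entries, each needing ~40 bits to pin down.
entries = 20_000
bits_per_entry = 40
grammar_bits = entries * bits_per_entry        # 800,000 bits (~100 KB)

# Information in one book: assume ~80,000 words at ~10 bits per word
# (a common rough figure for word-level entropy of English text).
words_per_book = 80_000
bits_per_word = 10
book_bits = words_per_book * bits_per_word     # 800,000 bits

print(f"grammar : {grammar_bits:,} bits")
print(f"one book: {book_bits:,} bits")
print(f"books needed, before dividing by the number of "
      f"nearly-equivalent grammars: {grammar_bits / book_bits:.1f}")
```

Under these made-up numbers the grammar fits in roughly one book’s worth of information, and dividing by the (enormous) number of adequate, nearly-equivalent grammars only pushes the requirement further down.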
Chomsky also claims that, based on the number of words children learn per day, they must be able to learn a word on a single exposure to it. This assumes that a child can work on only one word at a time, and not remember anything about any other words it hears until it learns that word. As far as I know, no linguist has yet noticed this assumption.
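A toy simulation makes the parallel-learning point concrete. The vocabulary size, tokens heard per day, and the 10-exposure learning threshold are all hypothetical numbers, chosen only to show that a high rate of words-learned-per-day does not require learning any word from a single exposure.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical numbers: a 20,000-word vocabulary with Zipf-like
# frequencies, ~5,000 word tokens heard per day, and a word counts as
# "learned" only after 10 separate exposures (no one-shot learning).
vocab = range(20_000)
weights = [1.0 / (rank + 1) for rank in vocab]
exposures = Counter()
learned = set()

for day in range(1, 181):
    for word in random.choices(vocab, weights=weights, k=5_000):
        exposures[word] += 1
    newly = {w for w, c in exposures.items() if c >= 10} - learned
    learned |= newly
    if day % 30 == 0:
        print(f"day {day:3d}: {len(newly):3d} new words today, "
              f"{len(learned):5d} total")
```

With these assumptions the simulated child crosses the learning threshold on dozens of words every day, even though each individual word took ten exposures, because exposures to thousands of words accumulate in parallel.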
In the field of “sciencology,” or whatever you call the people who try to scientify science (e.g., “We must make science more efficient, and only spend money discovering those things that can be successfully utilized”), there was an influential paper in 1969 on Project Hindsight, which studied the major discoveries contributing to a large number of US weapons systems and asked whether each discovery was made via basic research (often at a university) or by a DoD-directed applied R&D program specific to that weapon system. They found that most of the contributions, numerically, came from applied engineering specific to the weapon system in question. They concluded that basic research is basically a waste of money and should not have its funding increased any further. Congress has followed their advice since then. They ignored two factors: 1) According to their own statistics, universities accounted for 12% of the discoveries, but only 1% of the cost. This by itself shows basic research to be more cost-effective than applied research. 2) They did not factor in the fact that the results of each basic research project were applied to many different engineering projects, while the results of each applied project were often applied to only one project.
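Taking the figures cited above at face value, the implied cost-effectiveness gap is easy to work out. (The 88%/99% remainder attributed to applied programs is just my rough complement of the two numbers given, so treat it as an approximation.)

```python
# Project Hindsight's numbers as cited above: universities produced 12% of
# the contributions for 1% of the cost; treat the remainder (88% of the
# contributions for 99% of the cost) as the applied programs' share.
uni_share, uni_cost = 0.12, 0.01
applied_share, applied_cost = 0.88, 0.99

uni_yield = uni_share / uni_cost             # contributions per unit of cost
applied_yield = applied_share / applied_cost

print(f"university yield: {uni_yield:.1f}")                    # ~12
print(f"applied yield   : {applied_yield:.2f}")                 # ~0.9
print(f"ratio           : {uni_yield / applied_yield:.1f}x")    # ~13.5x
```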
NASA has had some projects to try to notify ETs of our presence on Earth. AFAIK they’re still doing it? They should have asked transhumanists what the expected value of being contacted by ET is.
Though you also see cases where people from the outside do get their message across, repeatedly, and fail to make an impact. Something more is going wrong then.
The FDA, in its decision whether to allow a drug on the market, doesn’t do an expected-value computation. They would much rather avoid one person dying from a reaction than save one person’s life. They know this. It’s been pointed out many times, sometimes by people in the FDA. Yet nothing changes.
EDIT: Probably a bad example. The FDA’s motivational structure is usually claimed to be the cause of this.
Maybe when one particular stupidity thrives in a field, it’s because it’s a really robust meme for reasons other than accuracy. There are false memes that can’t be killed, because they’re so appealing to some people. For example, “Al Gore said he invented the Internet”—a lie repeated 3 times by Wired that simply can’t be killed, because Republicans love it. “You only use 1/10th of your brain” - people love to imagine they have tremendous untapped potential. “Einstein was bad at math”—reassures people that being good at math isn’t important for physics, so it’s probably not important for much.
So, for example, NASA keeps trying to get ET’s attention, not because it’s rational, but because they read too many 1950s science fiction novels. The people behind Project Hindsight and Factors in the Transfer of Technology wanted to conclude that basic research was ineffective, because they were all about making research efficient and productive, and undirected exploratory research was the enemy of everything they stood for. Saying that humans have a universal grammar is a reassuring story about the unity of humanity, and also about how special and different humans are. And the FDA doesn’t picture themselves as bureaucrats optimizing expected outcome; they picture themselves as knights in armor defending Americans from menacing drugs.
This, and your comment below, should be top-level posts IMO.
These are interesting examples, but they’re not what I envisioned from your original comment. (The Brooks example might be, but it’s the vaguest.)
A problem is that people gain status in high-level fights, so there is a lot of screening of who is allowed to make them. But the screening is pretty lousy and, I think, most high-level fights are fake. Are Chomsky’s followers so different from other linguists? Similarly, Brooks may have been full of bluster for status reasons that were not going to affect how the actual robots worked. It may be hard for outsiders to tell what’s really going on. But the bluster may have tricked insiders, too.
Also, “You don’t understand information theory,” while one sentence, is not a very effective one.
People are still doing it, not NASA though. Their rationalizations can get pretty funny. It seems stupid but rather harmless; it’s hard to find a set of assumptions under which there’s a nontrivial probability that it matters.
TED.
I think that you’re saying that the outsiders can’t be published without learning the jargon and doing experiments. But publication is not the only avenue. If it really only takes a single sentence, the outsider should be able to find an insider who will look past jargon and data and listen to the sentence. Then the insider can tell other insiders, or tack it onto a publication, or do the new experiments.
If jargon is not just a barrier to publication but also to communication, it’s a lot harder to find a sympathetic insider, but it hardly seems impossible. Also, in that situation, how can outsiders be sure they understand?
These situations sound like there is a much bigger problem than the elementary error, perhaps that the people involved just don’t care about seeking truth, only about having a routine.
“These situations sound like there is a much bigger problem than the elementary error, perhaps that the people involved just don’t care about seeking truth, only about having a routine.”
Well, a large part of it is funding/bureaucracy/grants. I tend to think that’s the main part in many of these fields. Look at Taubes’s Good Calories, Bad Calories for a largely correct history of how the field of nutrition went wrong and is still going at it pretty badly. You do have a growing number of insiders doing research off the “wrong” path, as there have been all along, but they never got strong enough to challenge the “consensus,” and that’s due not just to the field but to forces outside the field (think tanks, government agencies, media reports). So even being published and well-known isn’t enough to change a field.
It’s just you.
I was talking about the content of artificial intelligence books published in the 1980s. None of the examples you gave involved anything from the BOFAI school of artificial intelligence; nothing that would have been in those books.