Some examples off the top of my head:
Rodney Brooks and others published many papers in the 1980s on reactive robotics. (Yes, reactive robotics are useful for some tasks; but the claims being made around 1990 were that non-symbolic, non-representational AI was better than representational AI at just about everything and could now replace it.) Psychologists and linguists could immediately see that the reactive-behavior literature was chock-full of the same mistakes that critics of behaviorist psychology had pointed out in the decade after 1956 (see, e.g., Noam Chomsky’s 1959 review of Skinner’s Verbal Behavior).
To be fair, I’ll give an example with Chomsky on the receiving end: Chomsky prominently and repeatedly claims that children are not exposed to enough language to get enough information to learn a grammar. This claim is the basis of an entire school of linguistic thought holding that a universal human grammar must be built into the human brain at birth. It is trivial to demonstrate that it is wrong: take a large grammar, such as one used by any NLP program (and, yes, such grammars can handle most of the grammar of a 6-year-old), compute the amount of information needed to specify that grammar, and compare it to the amount of information present in, say, a book. Even before you adjust your estimate downward by dividing by the number of adequate, nearly-equivalent grammars (which reduces the information needed by orders of magnitude), you find you only need a few books’ worth of information. But linguists don’t know information theory very well.
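Here’s the back-of-envelope version of that computation. Every number below is a made-up, order-of-magnitude guess, but the conclusion survives large changes to any of them:

```python
import math

# All numbers are illustrative order-of-magnitude guesses.
n_rules = 1000          # rules in a broad-coverage grammar
symbols_per_rule = 5    # symbols mentioned in an average rule
vocab_size = 10_000     # distinct symbols a rule can draw on

# Bits to write down the grammar: each symbol choice costs log2(vocab_size).
grammar_bits = n_rules * symbols_per_rule * math.log2(vocab_size)

# Bits of information in one book, at a Shannon-style entropy estimate
# for English text.
chars_per_book = 500_000
bits_per_char = 1.5
book_bits = chars_per_book * bits_per_char

print(grammar_bits, book_bits, book_bits / grammar_bits)
```

On these guesses the grammar costs about 66,000 bits and a single book carries about 750,000, so one book already over-supplies the specification by an order of magnitude, before the nearly-equivalent-grammars discount.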
Chomsky also claims that, based on the number of words children learn per day, they must be able to learn a word from a single exposure to it. This assumes that a child can work on only one word at a time, remembering nothing about any other word it hears until that word is learned. If children instead accumulate partial evidence about many words in parallel, the arithmetic is unremarkable. As far as I know, no linguist has yet noticed this assumption.
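The arithmetic only forces one-shot learning under that one-word-at-a-time assumption. Drop it and the numbers look like this (all figures illustrative, not measurements):

```python
# Illustrative numbers, not measurements.
tokens_heard_per_day = 10_000   # word tokens a child hears in a day
words_learned_per_day = 10      # commonly cited acquisition rate

# If partial progress on every word accumulates in parallel, then on
# average each eventually-learned word can draw on this many exposures:
exposures_per_learned_word = tokens_heard_per_day / words_learned_per_day
print(exposures_per_learned_word)  # 1000.0
```

So even if each word needed hundreds of exposures, the observed learning rate would be unsurprising.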
In the field of sciencology, or whatever you call the people who try to scientify science (e.g., “We must make science more efficient, and only spend money discovering those things that can be successfully utilized”), there was an influential 1969 paper on Project Hindsight, which studied the major discoveries contributing to a large number of US weapons systems and asked whether each discovery came from basic research (often at a university) or from a DoD-directed applied R&D program specific to that weapon system. They found that most of the contributions, numerically, came from applied engineering specific to that weapon system. They concluded that basic research is basically a waste of money and should not have its funding increased further. Congress has followed their advice since then. They ignored two factors: 1) By their own statistics, universities accounted for 12% of the discoveries but only 1% of the cost, which by itself shows basic research to be more cost-effective than applied research. 2) The results of each basic research project were applied to many different engineering projects, while the results of each applied project were often applied to only one.
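Factor (1) is a single division: on the paper’s own numbers, university research produced discoveries at roughly twelve times the portfolio-average rate per dollar:

```python
# Hindsight's own reported shares, as cited above.
university_discovery_pct = 12   # % of key discoveries from universities
university_cost_pct = 1         # % of total cost spent at universities

# Discoveries per dollar at universities, relative to the portfolio average:
cost_effectiveness_ratio = university_discovery_pct / university_cost_pct
print(cost_effectiveness_ratio)  # 12.0
```

And that is before factor (2), the reuse of each basic result across many engineering projects, multiplies it further.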
NASA has had some projects to try to notify ETs of our presence on Earth. AFAIK they’re still doing it? They should have asked transhumanists what the expected value of being contacted by ET is.
Though you also see cases where people from the outside do get their message across, repeatedly, and fail to make an impact. Something more is going wrong then.
The FDA, in deciding whether to allow a drug on the market, doesn’t do an expected-value computation. It weighs one death from an approved drug’s side effects far more heavily than one death from keeping an effective drug off the market: it would much rather avoid one person dying from a reaction than save one person’s life. They know this. It’s been pointed out many times, sometimes by people in the FDA. Yet nothing changes.
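For contrast, a minimal sketch of the expected-value computation they don’t do, with entirely hypothetical numbers:

```python
# Entirely hypothetical numbers for one drug decision; a real analysis
# would need actual estimates of both kinds of deaths.
side_effect_deaths_per_year = 100       # deaths caused if the drug is approved
disease_deaths_averted_per_year = 1000  # deaths prevented if it is approved
years_of_delay = 2                      # extra review time being considered

# Net lives lost by delaying approval rather than approving now:
net_lives_lost_to_delay = years_of_delay * (
    disease_deaths_averted_per_year - side_effect_deaths_per_year
)
print(net_lives_lost_to_delay)  # 1800
```

On any such numbers where the drug prevents more deaths than it causes, every year of delay costs lives on net; the asymmetric weighting hides that cost.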
EDIT: Probably a bad example. The FDA’s motivational structure is usually claimed to be the cause of this.
Maybe when one particular stupidity thrives in a field, it’s because it’s a really robust meme for reasons other than accuracy. There are false memes that can’t be killed, because they’re so appealing to some people. For example, “Al Gore said he invented the Internet”—a lie repeated 3 times by Wired that simply can’t be killed, because Republicans love it. “You only use 1/10th of your brain” - people love to imagine they have tremendous untapped potential. “Einstein was bad at math”—reassures people that being good at math isn’t important for physics, so it’s probably not important for much.
So, for example, NASA keeps trying to get ET’s attention not because it’s rational, but because they read too many 1950s science fiction novels. The people behind Project Hindsight and Factors in the Transfer of Technology wanted to conclude that basic research was ineffective, because they were all about making research efficient and productive, and undirected exploratory research was the enemy of everything they stood for. Saying that humans have a universal grammar is a reassuring story about the unity of humanity, and also about how special and different humans are. And the people at the FDA don’t picture themselves as bureaucrats optimizing expected outcomes; they picture themselves as knights in armor defending Americans from menacing drugs.
This, and your comment below, should be top-level posts IMO.
These are interesting examples, but they’re not what I envisioned from your original comment. (The Brooks example might be, but it’s the vaguest.)
A problem is that people gain status in high-level fights, so there is a lot of screening of who is allowed to make them. But the screening is pretty lousy and, I think, most high-level fights are fake. Are Chomsky’s followers so different from other linguists? Similarly, Brooks may have been full of bluster for status reasons that were not going to affect how the actual robots were built. It may be hard for outsiders to tell what’s really going on. But the bluster may have tricked insiders, too.
Also, “You don’t understand information theory,” while one sentence, is not a very effective one.
People are still doing it, not NASA though. Their rationalizations can get pretty funny. It seems stupid but rather harmless; it’s hard to find a set of assumptions under which there’s a nontrivial probability that it matters.