What science needs
Science does not need more scientists. It doesn’t even need you, brilliant as you are. We already have many times more brilliant scientists than we can fund. Science could use a better understanding of the scientific method, but improving how individuals do science would not address most of the problems I’ve seen.
The big problems facing science are organizational problems. We don’t know how to identify important areas of study, or people who can do good science, or good and important results. We don’t know how to run a project in a way that makes correct results likely. Improving the quality of each person on the project is not the answer. The problem is the system. We have organizations and systems that take groups of brilliant scientists, and motivate them to produce garbage.
I haven’t got it all figured out, but here are some of the most important problems in science. I’d like to turn this into a front-page post eventually, but for now I’m going to post it to Discussion, and ask you to add new important problems in the comments.
Egos
A lot of LWers think they want to advance scientific understanding. But I’ve learned after years in the field that what most scientists want even more is to prove how smart they are.
I couldn’t tell you how many times I’ve seen a great idea killed because the project leader or someone else with veto power didn’t want someone else’s idea or someone else’s area of expertise to appear important. I’ve been “let go” from two jobs because I refused when my bosses flat-out told me to stop proposing solutions for the important problems, because that was their territory.
I don’t mean that you should try to stop people from acting that way. People act that way. I mean you should admit that people act that way, and structure contracts, projects, and rewards so that these petty ego-boosts aren’t the biggest rewards people can hope to get.
Too many “no”-men
The more people your project has who can say “no”, the worse the results will be. This is one reason why Hollywood feature films are stupid, why start-ups do good work, and why scientific projects are so often a waste of money. Good ideas are inherently unpopular. Most of the projects that I’ve worked on have been crippled because every good idea ran into someone with veto power who didn’t want to do things differently, or didn’t want somebody else to get credit for solving the problem. See “Egos”.
Saying “no” to bad projects is important, but once the project is underway, there is a bias to say “no” more than “yes”, even after adjusting for the number of times you can say “yes” in total. Requiring consensus is especially pernicious. You can’t get good results when everybody on the project has to say “yes” to new ideas.
Jurisdiction arguments
Team members often disagree about whose expertise particular decisions fall under. Most people see how their expertise applies to a problem more easily than they can see how someone else’s expertise applies to a problem. What usually happens is that territorial claims are honored from the top of the org chart on down, and by seniority. For example, I worked for a computer game company where the founder hired a scriptwriter, then came up with his own story ideas and told the scriptwriter to implement them. The implementation had no text; the scriptwriter took the story ideas and produced descriptions of scenes acted out with body language. The animators thought that body motion fell completely within their jurisdiction, so they felt free to rework whatever they saw differently. The scriptwriter had very little chance for creative input, no control over anything, and very little job satisfaction.
This is a common problem for computer scientists and mathematicians. Computer scientists and mathematicians see themselves as people who understand how to take a set of data and arrive at the desired results most effectively. This includes figuring out what data to look at, and, in the best case, means being involved in the proposal writing: looking at possible problems to address, and determining which problems are soluble and which are not, based on information theory. This never happens. People in other specialties see computer scientists as a kind of lab technician to bring on after they’ve figured out what problem to address, and what data and general algorithm to use. They see statisticians as people to consult when the project is done and they’re writing up the results. They aren’t even aware that these other disciplines can do more than that.
A classic example is the Human Genome Project. Some people you never hear about, including my current boss, came up with algorithms to take whole-genome shotgun data and assemble it. Craig Venter went to the leaders of the Human Genome Project and explained to them that, using this approach, they could finish the project at a fraction of the cost. Anybody with a little mathematical expertise could look at the numbers and figure out on the back of a napkin that, yes, this could work. But all the decision-makers on the HGP were biologists. I presume that they didn’t understand the math, and didn’t believe that mathematicians could have useful insights into biological problems. So they declared it impossible—not difficult, but theoretically impossible—and plowed ahead, while Craig split off to use the shotgun approach. Billions of taxpayer dollars were wasted because a few people in leadership positions could not recognize that a problem in biology had a mathematical aspect.
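(To give a flavor of that napkin math, here is a minimal sketch using the standard Lander–Waterman coverage model. The genome size, read length, and coverage target are illustrative round numbers of my choosing, not the actual HGP or Celera figures.)

```python
import math

# Lander-Waterman model: N reads of length L from a genome of size G
# give coverage c = N * L / G. Assuming reads land uniformly at random,
# the expected fraction of the genome left unsequenced is e^(-c), and
# the expected number of assembled pieces (contigs) is about N * e^(-c).

G = 3.0e9   # genome size in bases (human, roughly)
L = 500     # read length (Sanger-era ballpark; illustrative)
c = 8.0     # target coverage (illustrative)

N = c * G / L                 # number of reads needed
gap_fraction = math.exp(-c)   # expected fraction of genome in gaps
contigs = N * math.exp(-c)    # expected number of contigs

print(f"reads needed:       {N:.2e}")
print(f"fraction uncovered: {gap_fraction:.4%}")
print(f"expected contigs:   {contigs:,.0f}")
```

At 8× coverage the model leaves only a few hundredths of a percent of the genome in gaps, in on the order of tens of thousands of contigs; the feasibility question reduces to arithmetic that anyone with a little probability theory can check.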
Muzzling the oxen
“Thou shalt not muzzle the ox when he treadeth out the corn.” — Deuteronomy 25:4
I believe that a large number of the problems with scientific research are tolerated only because nothing is at stake financially. Government agencies have tried very hard to ensure that people do the work specified in their contracts. You have to say in the proposal what you’re going to do, and itemize all your costs, and do what you said you would do, and write reports once a month or once a quarter showing that you’re doing what you said you would do. This results in obvious, unfortunate stupidities. We can spend $30,000 to have an employee write a piece of software that we could have bought for $500, or to solve a problem that a consultant could have solved for $500, but we can’t buy the software or hire the consultant, because they aren’t listed in the contract and the employee is.
But the bigger problem is that the strict financial structure of scientific research makes it illegal to motivate scientists by giving them a percentage of resulting profits. You simply can’t write up a budget proposal that way. So managers and team members indulge their prejudices and fantasies because the little bit of self-esteem boost they get from clinging to their favorite ideas is worth more to them than the extra money they would earn (zero) if the project produced better results. Examples of petty prejudices that I’ve seen people wreck good work to preserve: top-down over bottom-up design, emacs over vim (I was in a shop once where the founders forbade people from using vim, which had an astonishingly destructive effect on morale), rule-based over statistical grammars, symbolic logic over neural networks, linguistics expertise as more important than mathematical expertise, biological expertise as more important than mathematical expertise, and, always, human opinions gathered from a few hundred examples as more valid than statistical tests performed on millions of samples.
When I read about machine learning techniques being applied in the real world, half the time it’s by trading firms. I haven’t worked for one, so I don’t know; but I would bet they are a lot more receptive to new ideas because, unlike scientists, they care about the results more than about their egos. Or at least, an appreciable fraction as much as they care about their egos.
Entry costs
Everybody in science relies on two metrics to decide whom to hire and whom to give grants to: what their recent publications are, and what school they went to. It is possible to go to a non-top-ranked school and then get on important projects and get publication credits. Someone who just left our company on Friday worked the magic of cranking out good research publications while working as a programmer, always taking on only projects that had good publication potential and never getting stuck with the horrible life-sucking, year-sucking drudgery tasks of, say, converting application X from using database Y to database Z. I just don’t know how he did it.
For the most part, that doesn’t happen. You don’t become a researcher; you start out as a researcher. You need to stay in school, or stay on as a postdoc, until you have your own track record of publications and have won your own grant. You need people to read those publications. You don’t get to work on important projects and get your work read and get a grant because you’re brilliant. You get these things because your advisor works the old-boy network for you. Whatever your field is, there is a network of universities that are recognized as leaders in that field, and you are more or less assured of failure in your career (especially in academia or research) unless you go to one of those universities, because you won’t get published in good journals, you won’t get read much, and you won’t get a big grant.
There are exceptions. Fiction writers and computer programmers don’t need to go to a fancy university; they need credits and experience. (Computer programmers in industry, that is. But don’t get a Ph.D. in computer science from a non-elite university and imagine you’re going to do research; it won’t happen.) Good stories can sort of be recognized; basic knowledge about Enterprise Java can be measured. Companies have recognized the monetary value of doing so. But grant review panels and companies don’t really know how to rate scientists or managers, so they try to get somebody from MIT or from Wharton, because nobody ever got fired for buying a Xerox.
The value of scientists to their companies may or may not be reflected in their salaries, but the value of those select universities is certainly reflected in the price of tuition. If your college of choice costs you less than $55,000/yr to attend, including room and board, it will not lead you to success. Unfortunately, the U.S. government won’t loan you more than $10,000/yr for tuition.
(One interesting exception is in cosmology. I did a study of successful physicists, as measured by their winning the Nobel or being on the faculty at Harvard. I found that after 1970, no one was successful in physics unless they went to an elite undergraduate college, with a few exceptions. The exceptions were astrophysicists who went to college in Arizona or Hawaii, where there are inexpensive colleges that are recognized as leading institutions in astronomy because they have big telescopes.)
Search
The single-biggest problem with science today is finding relevant results. I have had numerous discussions with experts in a field who were unaware of recent (and not-so-recent) important results in their field because they relied on word-of-mouth and a small set of authoritative journals, while I spent half an hour with Google before our meeting. To take a spectacularly bad example, the literature showing that metronidazole kills Borrelia burgdorferi cysts, while penicillin, doxycycline, amoxicillin, and ceftriaxone do not, is over ten years old; yet metronidazole is never prescribed for Lyme disease while the latter are.
Attention is the most-valuable resource in the twenty-first century. Producing a significant result is not hard. Getting people to pay attention to it is. Scientometric analysis of scientific publications shows that producing more and more papers in a field has very little impact on the number of papers cited (a proxy for number of results used), probably because scientists basically read up to one paper per day chosen from one or two leading journals, and that’s it. They aren’t in the habit of regularly, actively searching for things relevant to their work; and frankly, there isn’t much motivation to do that, since using Google to answer a specific question is like using excavation equipment to search for a needle in a haystack.
You fell afoul of Putt’s First Law of Decision Making: Managers make decisions.
Can you ask him?
I’ve asked him twice, but didn’t get a clear answer beyond “Get out of dead-end departments that aren’t intellectually sexy.”
I’m guessing that people are relatively good at identifying departments which aren’t intellectually sexy, but may not have the tools for getting out of them. Have you asked him about the latter?
Another possibility is that they could figure out for themselves how to stay out of those departments, but they believe that the right thing to do is not take initiative.
A third is that if more people tried to stay out of dead-end work, the pressure to take it would increase.
Most of my colleagues seem to want to burrow deep into their research niche and start counting their pension. The idea of wanting to switch departments, or of rocking the boat by refusing a project offered by a superior, is antithetical to their psychology.
What I’m saying is that some mix of ambition and risk tolerance is required to even see the options for career development clearly.
So, what I have noticed is that professors primarily work as salespeople (writing grant proposals), managers (as principal investigators), and educators (teaching classes), but they typically have formal training for zero of those three tasks. A number of professors I know have commented that they were happier as postdocs (read: spending lots of time in the lab) than they are now as professors, and I can’t help but think that there are gains from specialization to be had here. There’s some overlap with the problems you describe, but those are the things I would emphasize to get support from scientists in the system when pushing institutional change.
The implied premise here is that science is unproductive today. In physics we’ve found the Higgs boson. In math, we’ve proven the fundamental lemma of the Langlands program. In astronomy, we’ve found multiplanet star systems. In quantum computing, we’ve made major progress in practical implementation of factoring algorithms. This is all in the last three years and is essentially off the top of my head. Given that, the claim that science is “so unproductive” today seems at minimum to be a claim which shouldn’t be made without some evidence to support it.
As always, relative to what?
Yes, that’s an important issue certainly. To some extent, scientific and engineering progress clearly feels slower than it seems to have been historically. We don’t have frequent things like the theory of evolution showing up now. But when phrased that way, it seems that much of the slow down is simply that we’ve picked off the low-hanging fruit. We have a pretty good understanding of basic physics and biology today, so the remaining discoveries will be necessarily more incremental.
I’m more interested in prospective comparisons than retrospective comparisons.
If you compare the money and manpower put into science today to that of 50, 100, and 150 years ago, you will be astonished at the reduction in productivity.
Probably not. I’ve written about how in some respects we are less productive now than we were 100 years ago, although that was in the context of technological, not scientific, development. But what your metric is matters a lot in this context. For example, the number of published papers in many fields has been increasing, but many of those papers are complete wastes. I don’t know a good way to measure that. If your time period to compare is that far back, then I’m almost more concerned about how you know what the cause is of a drop in productivity than anything else. In particular, how one can tell that this isn’t just the low-hanging fruit problem mentioned earlier.
I didn’t say that it wasn’t the low-hanging fruit problem. That is probably part of the problem. I don’t think it’s the biggest part. The biggest part of the problem, IMHO, isn’t anything that anyone did wrong; it’s that scientific output is inherently proportional to the log of resources spent. This is the low-hanging fruit problem when you’re talking about equipment cost in high-energy physics, but it’s the search problem when you’re talking about many other fields.
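To spell out what that proportionality implies (an illustrative functional form, not a fitted model):

$$O(R) = k\log R \;\Rightarrow\; O(10R) - O(R) = k\ln 10, \qquad \frac{dO}{dR} = \frac{k}{R}.$$

Each tenfold increase in resources $R$ buys the same constant increment of output, and marginal productivity falls as $1/R$, so large drops in per-dollar productivity are exactly what this curve predicts even when no one does anything wrong.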
So how would you distinguish between the different causes and how much it is one cause or another? If, for example, much of the problem is low-hanging fruit, then one would expect that the overall scientific productivity level would in some sense slow down over time. If most of it is the sort of systemic issues you are discussing, what observations could we make to test the claim?
I have a lot of evidence. On a per-dollar basis, science today is many orders of magnitude less productive than it was a century ago. I have a paper in draft I can email you.
Frankly, a draft of that paper would be far more interesting than this post itself. I’m curious what your metric is. The denominator is dollars, and the numerator is what?
Email my username at gmail.com, and I’ll send it to you.
ADDED: Now online here. Please leave comments there if you have any. It looks like you can’t do line-by-line comments on a Word document in google docs, though.
Done.
Alternatively, scientific problems might have got a lot harder! Compare the sheer amount of maths needed to understand quantum mechanics with something like gravitation.
(and I’m assuming you’re taking into account inflation etc.)
As far as I know, general relativity isn’t mathematically any simpler than quantum mechanics.
I think in context bryjnar meant simply Newtonian gravity.
Yep.
You ought to compare it to quantum field theory, not to non-relativistic quantum mechanics.
If you work at the NSA, aren’t you supposed to pretend afterwards that you didn’t? Explain the hole in your CV some other way? Or is that no longer the custom?
That was years ago. I think that changed in the eighties. You can’t list classified publications on your CV, or explain what you did, which is still a problem, and a big reason why I left 20 years ago. Ironically, I probably would have more unclassified publications by now if I’d stayed.
This seems like a really important question that a lot of people might benefit from knowing the answer to. How does one “manage upwards” and ensure that one gets put on the interesting, rewarding projects as opposed to the tedious stuff?
Well, I know at least some researchers simply give the tedious stuff to someone else. You can hire an external group of people (such as my company; I work in this field) and tell them “Check this dataset/program for common errors,” and get back a list of things like “These 30 people report both non-smoking status and 30 pack-years.” You can also have them do things like “Check this paper draft for publication,” “Collect these 12 different types of data into a single dataset,” “Generate these tens of thousands of forest plots,” etc. And yes, sometimes there is a “Convert this database type into this other database type.”
Since I’m a computer guy and not a researcher, the data grinding isn’t really any more tedious than any other job, and if anyone does acknowledge/coauthor me in a paper, it’s a pleasant surprise.
I feel like this is a rather longwinded way of saying comparative advantage though. A shorter answer is to find someone who is relatively better at tedious stuff and give them the task.
Making a note of this; Lyme disease is common enough that this would be useful to remember. (Or, because I’m pretty sure I’ll forget the specifics, to remember that I can search my LW comment history for it.) Thanks!
Here’s an interesting test of various antibiotics: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3132871/
It looks like different agents are more or less effective for different forms of the disease. This study didn’t look at Ceftriaxone as far as I can tell, which is what is usually used for later stage Lyme disease. Doxycycline is typically used for prevention, which makes sense because you’re probably less likely to have cysts forming immediately after infection. It doesn’t look like Metronidazole is as effective against non-cyst form Lyme disease.
and
Looks like Tinidazole might be even better than Metronidazole vs. cyst-form Lyme disease.
Lyme is a curious case, because most of the findings that researchers act unaware of are findings they are aware of but choose to ignore. There is a small group of doctors, most prominently Gary Wormser, who are determined to prove that chronic Lyme does not exist, despite a great deal of evidence to the contrary. To continue defending his position, Wormser has been reduced to claiming (6 years ago, in the IDSA recommendations, which govern most Lyme treatment in the US) that if you do a Western blot and an ELISA test, and either one of them comes back negative, that should override a positive result from PCR (which pretty much never gives false positives unless you really screwed up the primer design) or even visible detection of spirochetes via microscopy (on the grounds that Borrelia can’t be visibly distinguished from the microbes for leptospirosis or syphilis, even though all three have similar recommended treatments). Wormser and his gang dismiss all histological studies of the effects of antibiotics on Borrelia, every one of which has shown that antibiotics reduce the number of spirochetes, but don’t eradicate them. They claim that the dosage and pharmacodynamics were not compared correctly, and so their theories should be taken as stronger proof than experiments in animals. They also use the “no-man” principle above: One of their papers “proves” that patients don’t have Lyme by taking a group of patients, applying 5 different tests, and concluding that they don’t have Lyme if any one of the tests is negative, without any consideration of the false negative rates.
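(To make the arithmetic of that last point concrete, here is a minimal sketch; the per-test sensitivity of 0.8 is a made-up round number for illustration, not a figure from their paper.)

```python
# Decision rule: a patient counts as having Lyme only if ALL k tests are
# positive. Assuming independent errors, the combined sensitivity is the
# product of the individual sensitivities.

sensitivities = [0.8] * 5   # hypothetical sensitivity of each of the 5 tests

p_all_positive = 1.0
for s in sensitivities:
    p_all_positive *= s

print(f"P(all 5 positive | patient has Lyme) = {p_all_positive:.3f}")
# 0.8^5 ~= 0.328: under this rule, roughly two-thirds of genuinely
# infected patients would be declared disease-free.
```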
Changing the clinical guidelines to include additional antibiotics, on the grounds that they kill cysts, would mean admitting that some patients in the past might not have been cured of Lyme by antibiotic treatment without those additional drugs. That would invalidate many of the papers published on Lyme by the members of the committee that wrote the guidelines.
Lyme is, I think, the only disease in the world where patients routinely are diagnosed to have the disease by undisputed clinical evidence, get treatment, come back to their doctors and say “But I still have the symptoms!”, and are told that it’s all in their head because a paper proves that they’ve been cured. A key problem is that the accepted clinical tests are presence of the bulls-eye rash (which happens only on infection in most cases) and a positive blood serum assay (but the spirochetes live inside cells and may not trigger a blood immune response after dispersion). The only evidence most patients have that they’re still sick is that they still feel sick. Wormser et al. spend their time coming up with ways to discredit any clinical tests that have any chance of diagnosing late-stage Lyme.
Completely coincidentally, admitting that chronic Lyme exists, and, worse yet, coming up with an accepted way of diagnosing it, could instantly create a pool of hundreds of thousands of Americans covered by insurance policies that would then be obligated to pay indefinitely for treatment that costs up to $1000/month for oral amoxicillin, and perhaps $10,000/month for the more-effective intravenous drugs.
Wormser’s most recent paper argues that if a patient has a negative Lyme ELISA test on blood serum, then positive tests for Lyme based on immunological assays of synovial fluid, and also positive PCR tests, should be ignored, because all patients in a prior study of patients with Lyme who had positive blood serum tests had positive blood serum tests. Yes, you read that right.
While the existence of chronic Lyme is controversial, the current consensus seems to be closer to it not existing than it existing.
If you’ll look at the history page of that Wikipedia page, you’ll see that I’ve tried twice to edit out the egregious bias and lies in that particular Wikipedia page. Look at the articles cited, and look at who was on the panels cited. Most of the papers and all of the panels concluding there is no chronic Lyme were by the same people, who have all co-authored papers with each other: Wormser, Halperin, Feder, and Shapiro. The “Major US medical authorities, including the Infectious Diseases Society of America, the American Academy of Neurology, and the National Institutes of Health,” are all the same people. The Connecticut Attorney General’s investigation concluded that the IDSA panel was formed with prejudice and that Wormser excluded doctors who did not agree with him. The AAN panel was largely the same people as the IDSA panel. The NIH merely cited the IDSA and AAN guidelines without conducting any independent study. Look at everything said on that page without any citation or support; most of those statements are false.
Can you point to specific statements that you think are false?
(Edit: Incidentally, I can give quite a few other sources which agree on which way the consensus leans other than Wikipedia.)
Look at the history page and see what I tried to change. The page originally had “chronic lyme disease” in quotes, implying that it did not exist. “Most medical authorities advise against long-term antibiotic treatment” has 3 citations, but all of them refer to statements by the same small group of doctors, who have consistently masqueraded as “most medical authorities” here and in journal articles. The following statements say that chronic Lyme does not exist, but again all the citations are to the writings of that same small group of doctors. The article dismissing the fact that animal studies always find antibiotics do not eliminate Borrelia was also written by those same doctors. Then the page says, “Major US medical authorities, including the Infectious Diseases Society of America, the American Academy of Neurology, and the National Institutes of Health, have stated there is no convincing evidence that Borrelia is involved in the various symptoms classed as chronic Lyme disease, and advise against long-term antibiotic treatment as ineffective and possibly harmful.” The four citations given again refer only to the writings of and panels led by that small group of doctors. The statement that “Prolonged antibiotic therapy presents significant risks and can have dangerous side effects” is not supported in the Lyme literature; the risks are associated with intravenous therapy, while most patients have oral medication, and the risks of taking the antibiotics are less than the risks of not taking them.
Then we come to the “Controversy and politics” section.
When I modified the page to add a statement pointing out that every one of those citations and all of the authorities cited to claim that there is no chronic Lyme disease trace back to these five doctors, my statement was twice redacted, on the grounds that I was violating Wikipedia policy by combining information from multiple sources in a way that is not present in any of the sources. That is, because no one of the papers says that all of the papers were written by the same people, I was not allowed to point out that all of the papers were written by the same people!
“A minority view holds that chronic Lyme disease is responsible for a range of unexplained symptoms, sometimes in people without any evidence of past infection.”—I doubt that this is a minority view among doctors familiar with Lyme. Citation needed.
“as to argue for insurance coverage of long-term antibiotic therapy, which most insurers deny, as it is at odds with the guidelines of major medical organizations”—no citation to indicate that most insurers deny long-term antibiotic therapy, and I don’t think this is true anymore. Blue Cross & Blue Shield allows it.
The section on the IDSA guidelines antitrust investigation is not false, but is remarkably pro-IDSA considering the findings of the investigation, which were that the panel was rigged and the science was bad.
“A 2004 study in The Pediatric Infectious Disease Journal stated 9 of 19 Internet websites surveyed contained what were described as major inaccuracies. Websites described as providing inaccurate information included several with the word “lyme” in their domain name (e.g. lymenet.org), as well as the website of the International Lyme And Associated Diseases Society”—citation is to an article by Feder.
“A 2008 article in The New England Journal of Medicine argued media coverage of chronic Lyme disease ignored scientific evidence in favor of anecdotes and testimonials”—citation to article by Feder & Johnson.
In conclusion, there are no citations in the entire Wikipedia article to claims that chronic Lyme does not exist, except to papers co-authored by or panels led by Wormser, Feder, Halperin, and Shapiro; and I was not allowed to point that out.
Excellent. Thanks. This will take me some time to look at in detail, but my initial impression from reading this is that you’ve got a decent set of examples here.
Being from the relevant geographic region, that sounds intuitively implausible. I know lots of people who had Lyme disease, and most of them had symptoms after treatment. I feel as skeptical as if you’d told me that influenza is a genetic disorder.
The claim isn’t, though, that there aren’t post-disease symptoms. The issues are a) whether they can last for years afterward (rather than months) and b) whether they are caused by still-living bacteria.
It seems as though the controversy is over what to call the syndrome, not whether it exists:
http://www.cdc.gov/lyme/postLDS/
Not really. PTLDS is over a much shorter time span than what is often being called chronic Lyme. The claims about chronic Lyme are generally for a disease lasting on the order of years.
Also, PTLDS is a way of saying that people have the same symptoms, but you shouldn’t give them antibiotics anymore, because they don’t have bacteria, they have an autoimmune disorder, or crystallized dead bacteria that irritate the tissues, or some other novel, theoretical disorder (which is in every case less likely than the simple alternative that they still have bacteria).
Given that each one of the 6 or so experiments I’ve read about performed in vitro or in vivo show that antibiotics rarely kill all the bacteria, I don’t understand why it’s so hard to believe that some bacteria survive antibiotic treatment.
There do seem to be cases of that. Recent Lyme-related news suggests that such cases may be caused by repeated infections. This would seem to contradict the “no antibiotics” advice—at least for these patients. These were patients with multiple rashes, though—chronic Lyme disease is supposed to be different from that.
That is another article by Wormser. He studied cases of people who had repeated EM (bullseye rashes). The methodology is actually clever, looking at the OpsC protein. But of course he knows this is going to be cited out of context to imply that people with chronic Lyme are really just being re-infected. As you noticed, the bullseye rash usually occurs immediately after infection. Most people with chronic Lyme don’t get repeated rashes, and this study is irrelevant to them.
This makes sense — people who have gotten Lyme are not much more likely than others to move out of the Northeast U.S. or stay out of the woods, those being pretty much how to avoid exposure.
The article I cited suffers somewhat from sampling error though. You can’t really sample from those with migrating rashes—and then draw conclusions about the prevalence of repeat infections as an explanation for chronic cases.
Would you mind identifying this paper?
The conspiracy theory cuts both ways, though: a common conspiracy theory holds that massive numbers of people are being pumped full of painkillers and drugs that they don’t need by a multinational conspiracy between the drug companies and the FDA.
Even if believing one meant you had to believe the other, that would merely set the drug companies and insurance companies at odds. Which makes sense, given that one is trying to extract money from the other.
It’s conceivable that once there’s a group receiving expensive treatments, there’s a constituency to continue and expand those treatments, but there’s a constituency to not start spending more money on a merely potential constituency.
Some recent coverage of the controversy: http://www.newyorker.com/reporting/2013/07/01/130701fa_fact_specter?currentPage=all
This all shows the huge advantage to math types interested in academics of becoming an economist over a scientist. We almost never do postdocs, can often do quality research without grant money, and don’t need to publish with senior people if we don’t want to. Plus, I believe the academic job market for economists is pretty good right now.
The author hasn’t posed any scientific problems. Instead, they have made sweeping generalizations based on their bad experiences in one field.
This referee cannot recommend the article for publication.
Can’t tell if serious or ironically humorous. But the author’s experience includes years of work in each of these fields: linguistics, biology, cryptanalysis, air traffic management, artificial intelligence, and animation. And the author described five specific problems with science. If you don’t understand that the point of the article is that those five problems are the important scientific problems facing us now, then you missed the point entirely. If you’re complaining that you want “scientific problems” instead of problems with how science is done, well, that’s not my job here. I’m identifying the problem, not writing a grant proposal.
What’s the author’s specific experience with Air Traffic Management? (TMU controller, Facility Manager, or alphabet soup router?) When you proposed air traffic solutions, what allowances did you make for the specific knowledge that you lacked, or did you believe that you understood operations two or more steps outside your airspace?
I designed and implemented the prototype (“Build Zero”) for one of NASA’s ATM simulators (I forget what they call it now; it’s been AER, VAMS, VAST, and ACES), and did some work predicting the likely consequences of Free Flight (giving air crews more autonomy and requiring less intervention by controllers). I’ve been to facilities, but never worked at them.
That was one of the projects that helped make me cynical. I spent over 2 years doing almost all of the design, implementation, reporting, meetings, and promotion myself. Then when my hard work paid off, and won Raytheon a $30 million contract, dozens of people jumped on the project. My boss at the time took it over and gave himself (but not me) a giant bonus, and published papers on it but didn’t mention me anywhere in them. I did probably 90% of the work on the first deliverable, which about 6 companies collectively received millions of dollars for, but was never credited anywhere.
My opinion is that Free Flight sounds like a great idea, but almost all problems occur in TRACONs (the area just around an airport, that’s controlled by the airport controllers (EDIT: decius is correct; TRACON controllers and airport controllers are separate)), while the en-route airspace is managed well and doesn’t have nearly as much room for improvement. It probably has more symbolic importance—to get people used to the idea of airplanes having more autonomy. It may also weaken the controllers’ union, which is important in order to improve air traffic efficiency and safety. ATC is a task that humans are bad at and computers are good at, and humans should be doing as little of it as possible.
Let’s get the definitions straight: TRACONs typically exercise control jurisdiction over IFR aircraft and advisory responsibility for participating VFR aircraft from 500′ AGL to roughly 10,000 MSL, mostly in Class E airspace. I suppose that Class B airspace might be the first to be managed properly by computers, since the software wouldn’t have to predict the actions of aircraft that aren’t within its control jurisdiction. (It would still have to deal with emergencies, deviations due to weather, VFR aircraft unable to follow instructions while remaining clear of clouds, and other factors). Any such system that required new equipment in aircraft would fail, simply due to the political power of AOPA and other pilots’ organizations; requiring a $5-10k upgrade to every aircraft simply isn’t on the table.
Computers might be marginally better at handling high-volume routine situations, and a solution that provided computer assistance to pilots and allowed pilots to make more decisions might provide improvements. Computers might be well suited to controlling Class A airspace, because of the relative predictability there and lack of visual separation, but they simply are not well developed enough to provide vectors for a visual approach around clouds, a task which combined tower/TRACON controllers can and do perform. Anything short of a General AI is simply never going to be able to handle the airport environment, even to the point of observing the current weather conditions and selecting a runway configuration, much less to the point of dealing with snow removal situations, wildlife, vehicle or pedestrian deviations, blown tires, and emergency situations.
I also disagree that weakening NATCA is a step towards improving efficiency and safety, and none of the experience that you claimed indicates that you have any basis to observe any of the safety initiatives forced through by the union (like required rest periods, ATSAP reporting, and cultural changes toward a solution-oriented vice a blame-oriented culture.)
And your opinion is factually wrong; more problems occur in the airport environment than in the terminal environment. That’s also where visual separation is most often used and where virtually all emergencies end up being resolved. Small things, like replacing a centerline light on an active runway, can be handled easily by people and their ability to create fuzzy plans; I can not conceive of any traditional computer that could handle that situation without the airport NOTAM closing the runway.
Dude. Chill. By “within the TRACON” I meant to include the airport, which I think of as within the TRACON because I never had anything to do with airport controllers. They have the hardest job, yes. That was why all the theorizing about en-route airspace didn’t really help; delays almost always come back to runway problems.
I said the union has to be weakened because most en-route controllers should be replaced by computers, and the union won’t let that happen. The union wants to keep computers out of it and humans in it, and no amount of required rest periods (which, yes, I know about, and it’s pretty cheap of you to try to discredit me because I didn’t decide to talk about rest periods in a SINGLE PARAGRAPH’S discussion of ATC) will change the fact that computers are better than humans at keeping lots of vectors from intersecting.
On the other hand, most accidents and near-misses happen on and near the tarmac, and that’s not going to be computerized anytime soon. So I guess we can leave the union alone for now. And, in fact, we never got near the union anyway—when I left, the battle for automation was still between NASA and the FAA, and NASA has no power.
The TRACON is more like the enroute center than it is like the tower. Did you spend enough time with TMU to realize that the only ways to reduce delays due to volume are to increase the capacity of the airport or to reduce the number of departures to the airport? BOS learned that lesson well, and they cut everybody’s delay time significantly by allowing a circling approach to a shorter runway (ceiling permitting). Not everybody can make the approach, but the ones that can are exempt from the delays because they don’t use a scarce resource.

Again, keeping the vectors from intersecting is a tiny fraction of the job of controlling, even if that is what they mostly do. The big fraction is stuff like identifying the symptoms of hypoxia in pilots, talking the passenger of a heart attack victim onto a runway, and giving the pilot of an Airbus with severe birdstrike damage every option available. Humans scale very well when you can divide their responsibilities clearly; the workload of a controller in a sector that has N operations per hour scales roughly with N, regardless of the size of the sector, within reasonable limits. A computer system that identified and suggested corrections and changes in the enroute (or even the TRACON) environment would probably improve safety, but it would only reduce delays if it were responsible for maintaining specified separation on final, and did a better job than humans do. There are a lot of failure modes in vectoring for a final.

When you characterize controllers as being ambivalent about safety and acting to preserve their jobs, you are a victim of, and contributing to, a false stereotype of unions and union employees. Everyone in professional aviation takes safety very seriously, and no silicon-based computer system currently in existence can respond to novel situations as well as a human can.
What I’ve learned after years of being a decent human being is that most scientists are regular people who don’t appreciate being insulted. I’m a scientist, my parents are scientists, my best friends, my teachers, my sister, my uncle, aunt, and two cousins are scientists.
That’s offensive, and the rest of the post comes off similarly.
In response to fubarobfusco: there is a major difference between “fuck you!” and “I’ve been personally offended by you. Here’s why.”
Whether something is offensive or not should be distinguished from whether or not it is true (or even whether or not it is relevant).
I should have said “That’s offensive and untrue, and the rest of the post comes off similarly.”
This is one of the very few cases where I feel the need to explain a downvote. The passage that you quoted and responded to was not worth the response. iDante’s comment could be summarized as “fuck you!” — an otherwise contentless exclamation of hostility — so it should be downvoted without response.
(I also think meta comments about the voting system should be downvoted; but I can’t downvote my own comment. Go ahead.)
I’m a human, my parents are humans, my best friends are humans, my teachers, my sister, my uncle, aunt, and cousins are humans.
Now, should I feel offended?
Or should I learn about my biases and try to become stronger?
No, because the claim on the front page is backed up by evidence. It’s not just pulled out of one person’s limited experiences. It IS offensive to negatively stereotype a group of people without evidence.
The author’s great “problems” of science are the same way. A broad generalization is made from limited experience, then no actual investigation is performed. Bold assertions are provided in place of careful statistics. The conclusion, “the biggest problem in science is management,” is utterly unconvincing.
Good article. I would suggest not leading off with the current first two points (Egos + No-men), as I think the extent to which each of these is true varies between labs / disciplines, whereas the other three points seem to be more or less universal throughout science.
(I have personally been lucky enough to never run into the Ego / No-men problem with any of my advisors; for a while I thought everyone who complained about these things was being melodramatic, but as some point a critical mass of highly reasonable people made such claims that I now believe them.)
I also think you overstate some of your claims, which may be suboptimal given that your points are already strong without the overstatements (for instance, the $60k number is high, even before you take into account the large amount of financial aid that universities provide; even with no financial aid, MIT only costs $55k/year, for instance, as far as I can tell).
Thanks. I was adding $45K tuition + $15K room and board and other expenses, which was my recollection, but I see MIT tuition is about $40K/yr.
Bostrom’s “Predictions from Philosophy” offers similar advice, though not specific to scientists. In both cases the solution is to focus cognitive resources on strategic analysis, I suppose. However, it is really difficult to implement this on a large scale without hurting egos.
One question:
Do scientists know they need it? That’s a big question, so I’ll refine it:
Have you yet met any scientists who have expressed interest in being more effectively managed?
I am a scientist who would like to be more effectively managed. Note, though, that I don’t think that telling me what research to do would be “effective management”. Management is a service, not a position of authority. And scientific problems are sufficiently technical that it’s very unlikely that a manager would have a better idea than me what problems were important to approach and why.
Of course, that doesn’t mean I shouldn’t have to justify myself on some level (and being forced to do that in the past has been very valuable to me), but at some point you might just have to trust my intuition. The best advisors I’ve worked with were extremely good at knowing when to go with my intuition (even if it conflicted with theirs) and when to force me to make things more explicit.
I imagine a good manager would basically take care of creating an optimal working environment, and force me to come back to the bigger picture when necessary.
There’s a section in my paper about the study of the (in)effectiveness of application-oriented research vs. basic research, and the trend over the past 50 years to shift almost completely to application-oriented research (where goals are ultimately set, or at least approved, by Congress) despite all the literature showing that it’s a waste of money. In my opinion, the problem isn’t that managers aren’t telling scientists what research to do; it’s that they are.
No. At least, they don’t express concern about the points in this post.
Selection effect. Interning in a big genetics lab and seeing some of these problems first hand is why I wasn’t a science major.
Then I wonder how one could propose to a scientist an offer of effective management and be warmly received. Hefty financial compensation, combined with a noble and ambitious goal that would end—after years and years of research and development—in potentially great monetary and reputational profits, perhaps?
Search is very important and my experiences have been similar to what you described.
But I’ve often found myself wondering: what’s the solution?
I would love nothing more than to always be on top of current discoveries in my field. But it just seems impossible to do.
Google and similar tools are great but they only do part of the job. You have to know what to search for, and it is very easy to get caught up in your own biases and only search for ideas, keywords, etc. that you are most familiar with or currently most interested in, thereby easily missing many relevant results. This has happened to me often.
For example, I was once doing research on persistent random walks. Unfortunately, I searched very hard but could not find many results. Later on, I found that the mathematical model I was using was equivalent to the worm-like-chain (WLC) model in polymer physics. That discovery led me to discover a huge explosion of research and results quite relevant to my own work that I simply was not aware of before.
Semantic search technologies are helping but none of them have the ‘depth’ that true scientific literature searching requires. If you know of any tools that are of help in this regard, I would be greatly interested in them and I’m sure the community at large would be as well.
How did you find this out?
I wish you’d asked me this in 2012. I don’t quite remember the exact research process that led me to that connection. I had been searching for results relating to persistent random walks using terms like ‘persistence time’. But that won’t give you any WLC results. If you instead search for ‘persistence length’, you’ll get WLC results. The mathematics are identical; it’s just in one model it is a random walk through time and in another it is a random walk through space.
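For the record, the correspondence is between two standard correlation functions (textbook forms, stated here from memory):

$$\langle \hat{v}(t)\cdot\hat{v}(t+\tau)\rangle = e^{-\tau/\tau_p} \qquad\qquad \langle \hat{t}(s)\cdot\hat{t}(s+\Delta s)\rangle = e^{-\Delta s/\ell_p}$$

The left is the persistent random walk (direction decorrelates over time, with persistence time $\tau_p$); the right is the worm-like chain (tangent decorrelates along the contour, with persistence length $\ell_p$). The decay is identical; only the independent variable changes from time to arc length, which is why a search on “persistence time” never surfaces the “persistence length” literature.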
The TL;DR of why I quit my university’s atheist club.
What do you think about David Brin’s “disputation arenas?”
Maybe we could get a group of scientists to try out some form of disputation arena (Delphi Method for example) and see if they can be more effectively managed that way?
Could you give some evidence for this assertion? My feeling from the “inside” is quite the reverse—at least in the States, NSF funding has remained stagnant, and grant allocation is widely considered a “black box” with high uncertainty in outcomes. I’ve visited three other countries in the past two years, and the feeling there seemed much the same.
So in short, scientists should do science for profit at startups, and they’ll become massively rich so this solution should enact itself.
Research usually isn’t profitable when compared to other investments. The long time between starting research and rolling out a product means that putting your money in stocks or bonds has a higher expected payoff. Also, most discoveries can’t be patented. Physics is inefficiently patent-free.
In other words, you shouldn’t do research?
If research isn’t profitable, there’s something seriously wrong with the way research is being done.
Meh. Lots of industries do fine without patents.
Not necessarily. In many fields there’s simply a very large gap between when an idea is discovered and when it is applied. Since most scientific research is either freely or cheaply available, and the applications happen many steps later in the process, scientific research functions as a public good, and thus has trouble deriving individual profit even as society benefits from it.
Many of those are industries with little innovation. Others rely on industry secrets. And others, like microchip manufacturing, rely on a constant arms race of new approaches so that even if one had the details of what someone else was doing, they will be already working on the next thing.
Research has a high payoff, but only a little of it is captured by the people who did the research. Research should be done, but it’s a tragedy of the commons.
Sounds more like an awesome of the commons. Aren’t scientists relevantly similar to OSS programmers?