Messy Science

Sometimes it’s obvious who the good scientists are. They’re the ones who have the Nobel Prize, or the Fields Medal. They’re the ones with named professorships. But sometimes it’s not obvious—at least, not obvious to me.

In young, interdisciplinary fields (I’m most familiar with certain parts of applied math), there are genuinely different approaches to the same problems. So deciding between approaches is at least partly tied to whether you think something is, say, really a biology problem, a computer science problem, or a mathematics problem. (And that’s influenced by your educational background.) There are issues of taste: some people prefer general, elegant solutions, while others think it’s more useful to have a precise model geared to a specific problem. There are issues of goals: do we want to build a tool that can be brought to market, do we want to prove a theorem, or do we want to model what a biological brain does? And there’s always tension between making assumptions about the data that allow you to do prettier math, versus permitting more “nastiness” and obtaining more modest results.

There’s a lot of debate, and it’s hard for a novice to make comparisons; usually the only thing we can do is grab the coattails of someone who has proven expertise in an older, more traditional field. That’s useful for becoming a scientist, but the downside is that you don’t necessarily get a complete picture (as I get my education in math, I’m going to be more inclined to believe that the electrical engineers are doing it all wrong, even though the driving *reason* for that belief is that I didn’t want to be an electrical engineer when I was 18).

I’m hankering for some kind of meta-science that tells you how to judge between avenues of research when they’re genuinely different in kind. (It’s much easier to say “Lab A used sounder methodology than Lab B,” or “Proof A is more general and provides a stronger result than Proof B.”) Maybe it’s silly on my part—maybe it’s asking to compare the incomparable. But it strikes me as relevant to the LW community—especially when I see comments to the effect that such-and-such approach to AI is a dead end, not going to succeed, written as though the reason why should be obvious. I don’t know about AI, but it does seem that correctly predicting which research approaches are “dead ends” is a hard problem, and it’s relevant to think about how we do it. What’s your methodology for deciding what’s worth pursuing?

(Earlier I wrote an article called “What is Bunk?” in which I tried to understand how we identify pseudoscience. This is roughly the same question, but at a much higher level, when the subjects of comparison are all professional scientists writing in peer-reviewed journals.)