Why philosophy of science?

During my last few years working as an AI researcher, I increasingly came to appreciate the distinction between what makes science successful and what makes scientists successful. Science works because it has distinct standards for what types of evidence it accepts, with empirical data strongly prioritised. But scientists spend a lot of their time following hunches which they may not even be able to articulate clearly, let alone in rigorous scientific terms—and throughout the history of science, this has often paid off. In other words, the types of evidence which are most useful in choosing which hypotheses to prioritise can differ greatly from the types of evidence which are typically associated with science. I'll highlight two ways in which this happens.
The first is scientists thinking in terms of concepts which fall outside the dominant paradigm of their field. That might be because those concepts are too broad, too philosophical, or too interdisciplinary. For example, machine learning researchers are often inspired by analogies to evolution, or beliefs about human cognition, or issues in philosophy of language—all of which are very hard to explore deeply in a conventional machine learning paper! Such ideas are often mentioned briefly in papers, perhaps in the motivation section, but there's no room to analyse them with the level of detail and rigour required for making progress on tricky conceptual questions.
The second is that scientists often have strong visions for what their field could achieve, and long-term aspirations for their own research. These ideas can make a big difference to which subfields or problems researchers focus on. In the case of AI, some researchers aim to automate a wide range of tasks, others to understand intelligence, others to build safe AGI. Again, though, these aren't ideas which the institutions and processes of the field are able to discuss and evaluate thoroughly; instead, they are shared and developed primarily in informal ways.
Now, I'm not advocating for these ideas to be treated the same as existing scientific research—I think norms about empiricism are very important to science's success. But the current situation is far from ideal. As one example, Rich Sutton's essay on the Bitter Lesson in AI was published on his blog, and then sparked a fragmented discussion across other blogs and personal Facebook walls. Yet in my opinion this argument, which draws on his decades of experience in AI, is one of the most crucial ideas for the field to understand and evaluate properly. So I think we need venues where such discussions can occur in parallel with research that conforms to standard publication norms.
One key reason I'm currently doing a PhD in philosophy is that I hope philosophy of science can provide one such venue for addressing important questions which can't be explored very well within scientific fields themselves. To be clear, I'm not claiming that this is the main focus of philosophy of science—there are many philosophical research questions which, to me and to most scientists, seem misguided or confused. But the remit of philosophy of science is broad enough to allow investigation of a wide range of issues, while also rewarding thorough and rigorous analysis. So I'm excited about the field's potential to bring clarity and insight to the high-level questions scientists are most curious about, especially in AI. Even if this doesn't allow us to resolve those questions directly, I think it will at least help to tease out different conceptual possibilities, and thereby make an important contribution to scientific—and human—progress.