I find reading this post and the ensuing discussion quite interesting because I studied academic philosophy (both analytic and continental) for about 12 years at university. Then I changed course, moved into programming and math, and developed a strong interest in thinking about AI safety.
I find this debate a bit strange. Academic philosophy has its problems, but it’s also a massive treasure trove of interesting ideas and rigorous arguments. I can understand the feeling of not wanting to get bogged down in the endless minutiae of academic philosophizing in order to be able to say anything interesting about AI. On the other hand, I don’t quite agree that we should just re-invent the wheel completely and only then look to the literature to find the “philosophical nearest neighbor”. Imagine suggesting we do that with math: “Who cares what all these mathematicians have written, just invent your own mathematical concepts from scratch and then look for the nearest neighbor in the mathematical literature.” You could do that, but you’d waste a huge amount of time and energy re-discovering things that are already well understood in the appropriate field of study. I routinely find myself reading pseudo-philosophical debates among science/engineering types and thinking to myself, I wish they had read philosopher X on that topic so that their thinking would be clearer.
It seems that here on LW many people have a definition of “rationalist” that amounts to endorsing a specific set of philosophical positions or meta-theories (e.g., naturalism, Bayesianism, logical empiricism, reductionism). In contrast, I think the study of philosophy shows another way of understanding what it is to be a rational inquirer. It involves a sensitivity to reason and argument, a willingness to question one’s cherished assumptions, and a willingness to be generous with one’s intellectual interlocutors. In other words, being rational means following a set of tacit norms for inquiry and dialogue rather than holding a specific set of beliefs or theories.
In this sense, reason does not involve a commitment to any specific meta-theory. Plato’s theory of the forms, however implausible it seems to us today, is just as much an expression of rationalism in this philosophical sense: it was a good-faith effort to make sense of reality according to the best arguments and evidence of his day. For me, the greatest value of studying philosophy is that it teaches rational inquiry as a way of life. It shows us that all these different weird theories can be compatible with a shared commitment to reason as the path to truth.
Unfortunately, this shared commitment does break down in some places in the 19th and 20th centuries. The writing of certain continental “philosophers” like Nietzsche, Derrida, and Foucault undermines the commitment to rational inquiry itself, and ends up being a lot of posturing and rhetoric. However, even on the continental side there are philosophers who are committed to rational inquiry (my favourite being Merleau-Ponty, who pioneered ideas of grounded, embodied intelligence that have inspired certain approaches in RL research today).
I think it’s also worth noting that Nick Bostrom, who helped found the field of AI safety, is a straight-up Oxford-trained analytic philosopher. During my Master’s program, I attended a talk he gave on utilitarianism at Oxford back in 2007, before he was well known for AI-related work.
Another philosopher who I think should get more attention in the AI-alignment discussion is Harry Frankfurt. He wrote brilliantly on the value-alignment problem for humans (i.e., how we ourselves align conflicting desires, values, interests, etc.).