Why is there virtually nobody else interested in metaphilosophy or ensuring AI philosophical competence (or that of future civilization as a whole)?
I interpret your perspective on AI as combining several things: believing that superhuman AI is coming; believing that it can turn out very bad or very good, and that a good outcome is a matter of correct design; believing that the inclinations of the first superhuman AI(s) will set the rules for the remaining future of civilization.
This is a very distinctive combination of beliefs. At one time, I think Less Wrong was the only intellectual community in which that combination was commonplace. I guess that it later spread to parts of the Effective Altruism and AI safety communities, once they existed.
Your specific take, then, is that correct philosophical cognition may be essential, because decision theory, and normativity in general, is one of the things that AI alignment has to get right, and the best thinking there has come from philosophy.
I suspect that the immediate answer to your question is that this specific line of thought would only occur to people who share those three presuppositions—those “priors”, if you like—and that has always been a small group of people, busy with a very multifaceted problem.
And furthermore, if someone from that group did try to identify the kind of thinking by the AI that needs to be correct for a good outcome, they wouldn’t necessarily identify it as “philosophical thinking”—especially since many such people would disdain what is actually done in philosophy. They might prefer cognitive labels like metacognition, concept formation, or theory formation, or they might even think in terms of the concepts and vocabulary of computer programming.
One way to get perspective on this is to see whether someone else managed to independently invent this line of thought, but under a different name, or even in a different context. Here’s something ironic: it occurred to me to wonder whether anyone asked this question during the advent of psychoanalysis. Someone might have thought: psychoanalysis has the power to shape minds, it could determine the future of the human race, we’d better make sure that psychoanalysts have the right philosophy. If you look for discussions of psychoanalysis and metaphilosophy, I don’t think you’ll find that exact concern, but you will find that the first recorded use of the term “metaphilosophy” was by Morris Lazerowitz, a philosopher steeped in psychoanalysis. However, he was psychoanalyzing the preoccupations of philosophers, rather than sophoanalyzing the presuppositions of psychoanalysts.
Another person I checked was Jürgen Schmidhuber, the AI pioneer. I found a 2012 paper by him, telling “philosophers and futurists [to] catch up” with new computer-science definitions of intelligence, problem-solving, and creativity—many of them due to him. This is an example of someone in the AI camp who also went seeking cognitive fundamentals, but who came to regard something computational (in Schmidhuber’s case, data compression), rather than “philosophy”, as the wellspring of cognitive progress. (Incidentally, Schmidhuber’s attitude to the future of morality is relativism tempered by Darwinism—there will be multiple AI value systems, and the “survivors” will determine what is regarded as moral.)
On the other hand, I belong to a camp that arrives at the importance of philosophical cognition owing to concerns about inadequate philosophy in the community, and its consequences for scientific ontology and AI consciousness. I wrote an essay here a decade ago, “Friendly AI and the limits of computational epistemology”, arguing that physicalism (as well as more esoteric ontologies like mathematical platonism and computational platonism) is incomplete, but that the favored epistemologies, here and in adjacent communities, are formally incapable of noticing this, and that these ontological and epistemological presuppositions might be built into the AIs.
As it turns out, an even more pragmatist and positivist approach to AI, deep learning, won out, and as a result we now have AI colleagues that can talk to us, that have a superhuman speed and breadth of knowledge, but whose inner workings we don’t even understand. It remains to be seen whether the good that their polymathy can do outweighs the bad that their inscrutability portends for the future of AI alignment.