I’m a lot less excited about the literature of the world’s philosophy than I am about the living students of it.
Of course, there are some choices in designing an AI that are ethical choices, for which there’s no standard by which one culture’s choice is better than another’s. In this case, incorporating diverse perspectives is “merely” a fair way to choose how to steer the future—a thing to do because we want to, not because it solves some technical problem.
But there are also philosophical problems faced in the construction of AI that are technical problems, and I think the philosophy literature is just not going to contain solutions to them, because they require highly specific solutions that you’re not going to think of if you’re not even aware of the problem. You bring up ontological shifts, and I think the Madhyamaka Buddhist sutra you quote is a typical example—it’s interesting to a human reader, especially with the creativity in interpretation that hindsight affords us, but the criteria for being “interesting to a human” are far fewer and far more lenient than the criteria for designing a goal system that responds capably to ontological shifts.
The Anglo-American tradition of philosophy is in no way superior to Buddhist philosophy on this score. What is really necessary is “bespoke” philosophy oriented to the problems at hand in AI alignment. This philosophy is going to superficially sound more like analytic philosophy than, say, continental philosophy or Vedic philosophy, just because of what we need it to do, but that doesn’t mean it can’t benefit from a diversity of viewpoints and mental toolboxes.