If we can train an AI to be wise, this would imply an ability to automate training: that AI could, in theory, train other AIs to be wise, just as wise humans are able to train other humans to be wise. In such a scheme we would only need to train a single wise AI, which could then pass wisdom on to other AIs.
I think this is way too optimistic. Having trained a wise person or AI once does not mean that we have fully understood what we did to get there, which limits our ability to reproduce it. One could argue that recreation may be possible with fully reproducible AI training pipelines, or that a wise AI could simply be copied, but we shouldn’t assume this. The world is enormously complex and always in motion; nothing is permanent. What has worked in one context may not work in another. Agents considered wise at one point may not be at another, and agents who, in hindsight, were actually wise may not have been recognized as such at the time.
In addition, producing one wise AI does not necessarily imply that this AI can effectively pass on wisdom at the required scale. It may have a better chance than non-wise AIs, but if all we have managed is to produce a single wise AI, we shouldn’t take success as a given. Many forces are at play here that could subvert or overwhelm such efforts, particularly in race situations.
My gut feeling is that the transmission of wisdom is somewhat of a coordination game that depends on enclaves of relatively wise minds cross-checking, challenging, and supporting each other (cf. Thich Nhat Hanh’s “the next Buddha will be a Sangha”). Following this line of logic, the unit of analysis should be the collective, or even the ecology of minds and practices, rather than the “single” wise AI. I acknowledge that this is more of an epistemic than an ontological distinction (e.g., one could also think of a complex mind as a collective, as in Internal Family Systems), but I think it’s key to unpack the structure of wisdom and how it comes about, rather than thinking of it as “simply” a nebulous trait that can and needs to be copied.
This is a place where my Zen bias is showing through. When I wrote this, I was implicitly thinking of our system of dharma transmission, which, at least as we practice Zen in the West, also grants teaching authorization; my assumption was that if we feel confident certifying an AI as wise, this would imply also believing it to be wise and skilled enough to teach what it knows. But you’re right: these two aspects, wisdom and teaching skill, can be separated, and in Japan this is in fact the case. Dharma transmission generally comes years before teaching certification is granted, and many more people receive transmission than are granted the right to teach.