Ah, I see! My immediate instinct is to say “okay, design a narrow AI to play the role of a teacher” but 1. a narrow AI may not be able to do well with that, though maybe a fine-tuned language model could after it becomes possible to guarantee truthfulness, and 2. that’s really not the point lol.
There is something to be said for interactivity though. In my experience, the best explanations I’ve seen have been explorable explanations, like the famous one about the evolution of cooperation. Perhaps we can look into what makes those good and how to design them more effectively.
Also, something like a market for explanations might be desirable. What you’d need is three kinds of actors: testers seeking people who possess a certain skill; students seeking to learn the skill; and explainers who generate explorable explanations which teach the skill. Testers reward the students who do best at the skill, and students reward the explanations which seem to improve their success with testers the most. Somehow I feel like that could be massaged into a market where the best explanations have the highest values. (Failure mode: explainers bribe testers to design tests in such a way that students who learned from their explanations do best.)
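To make the incentive loop concrete, here is a toy simulation of that three-actor market. Everything in it is an illustrative assumption (the explanation names, the hidden "quality" numbers, the payout sizes), not a worked-out mechanism design; it just shows that if students route part of their tester rewards back to the explanation they used, market value tends to track explanation quality.

```python
import random

# Hypothetical toy model of the explanation market sketched above.
# Each explanation has a hidden "quality": the probability that a
# student who learns from it passes the tester's test.
random.seed(0)
explanations = {"expl_A": 0.9, "expl_B": 0.5, "expl_C": 0.2}
market_value = {name: 0.0 for name in explanations}

N_STUDENTS = 300
for _ in range(N_STUDENTS):
    # A student picks an explanation at random and studies it.
    choice = random.choice(list(explanations))
    # The tester rewards the student if they demonstrate the skill
    # (modeled as passing a noisy test with probability = quality).
    passed = random.random() < explanations[choice]
    payout = 1.0 if passed else 0.0
    # The student passes a share of the reward back to the
    # explanation that apparently earned it.
    market_value[choice] += 0.5 * payout

# With enough students, higher-quality explanations accumulate more
# value, so ranking by market value recovers the quality ordering.
ranking = sorted(market_value, key=market_value.get, reverse=True)
print(ranking)
```

Note that this toy version has honest testers baked in; the bribery failure mode above corresponds to testers whose pass probability depends on *which* explanation the student used rather than on the skill itself, which would decouple market value from quality.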