If you want to use an LLM as a tutor, I think that is doable in theory, but you can’t just talk to ChatGPT and expect effective tutoring to happen. The problem is that an LLM can be anything, simulate any kind of human, but you want it to simulate one very specific kind of human—a good tutor. So at the very least, you need to provide a prompt that turns the LLM into that specific kind of intelligence, rather than any of the alternatives.
Content—the same objection: the LLM knows everything, but it also knows all the misconceptions, crackpot ideas, conspiracy theories, etc. So in each lesson we should nudge it in the right direction: provide a list of facts, and a prompt that tells it to stick to the list.
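As a minimal sketch of what that per-lesson nudge could look like: a function that assembles the tutor prompt from a topic and a human-approved fact list. The facts and the exact wording are illustrative, not a tested curriculum.

```python
# Sketch: build a lesson prompt from a curated fact list.
# The topic, facts, and phrasing are made-up examples.
FACTS = [
    "Water boils at 100 °C at sea-level atmospheric pressure.",
    "The boiling point decreases as atmospheric pressure decreases.",
]

def lesson_prompt(topic: str, facts: list[str]) -> str:
    fact_list = "\n".join(f"- {f}" for f in facts)
    return (
        f"You are a patient, Socratic tutor teaching: {topic}.\n"
        "Base your explanations strictly on these facts:\n"
        f"{fact_list}\n"
        "If the student asks about something outside this list, "
        "say so explicitly and steer back to the lesson."
    )

print(lesson_prompt("boiling points", FACTS))
```

The point is that the fact list is authored (or at least reviewed) by a human; the LLM only decides how to present it.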
Navigation—provide a recommended outline. Unless the student wants to focus on something else, the LLM should follow a predetermined path.
Debugging—the LLM should test the student’s understanding very often. We could provide a list of common mistakes to watch out for. Also, we could provide specific questions that the student has to answer correctly, and tell the LLM to ask them at a convenient moment.
Consolidation—the LLM should be connected to some kind of spaced repetition system. Maybe the spaced repetition system would provide the list of things that the student should review today, and the LLM could choose the right way to ask about them, and provide feedback to the spaced repetition system.
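A sketch of that handshake between the spaced repetition system and the tutor, using the simplest possible scheduling rule (double the interval on success, reset on failure) just to make the loop concrete. A real system would use a proper algorithm like SM-2 or FSRS.

```python
import datetime

def due_items(cards: list[dict], today: datetime.date) -> list[dict]:
    """The SRS side: hand the tutor everything due for review today."""
    return [c for c in cards if c["due"] <= today]

def record_review(card: dict, correct: bool, today: datetime.date) -> dict:
    """The tutor's feedback: toy scheduling, double on success, reset on failure."""
    card["interval"] = card["interval"] * 2 if correct else 1
    card["due"] = today + datetime.timedelta(days=card["interval"])
    return card
```

The division of labor is the interesting part: the SRS owns *what* and *when*, the LLM owns *how* the review question is phrased and graded.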
tl;dr—the LLM should follow a human-made (or at least human-approved) curriculum, and cooperate with a spaced repetition system.