I kind of like this idea. I’ve been thinking for a long time about how to make LessWrong more of a place of active learning. My guess is there is still a lot of hand-curating to be done to create the right type of active engagement, but I do think the barrier to entry is a bunch lower now and it might be worth the time-investment.
Since you kind of like it, let me spell out the way I use this to help with active engagement.
My software lets the user select how much material they view at a time; I usually use 1–3 sentences. When the user activates the quiz feature, the material they're currently viewing is inserted into a static engineered prompt that cues ChatGPT to produce a quiz question based on that material, and the user is prompted to reply. The reply is then sent back to ChatGPT for grading.
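For concreteness, here is a minimal sketch of what the two static prompt templates might look like. The function names and prompt wording are my own illustration here, not the exact prompts the tool uses:

```python
def build_question_prompt(passage: str) -> str:
    """Wrap the currently viewed passage (1-3 sentences) in a static
    quiz-generation prompt. Wording is illustrative."""
    return (
        "You are a reading tutor. Write one short quiz question that tests "
        "comprehension of the following passage. Reply with only the question.\n\n"
        f"Passage:\n{passage}"
    )


def build_grading_prompt(passage: str, question: str, answer: str) -> str:
    """Wrap the passage, the generated question, and the user's reply in a
    static grading prompt. Wording is illustrative."""
    return (
        "Grade the user's answer to the quiz question, using only the passage "
        "as ground truth. Reply with CORRECT or INCORRECT and one sentence of "
        "feedback.\n\n"
        f"Passage:\n{passage}\n\n"
        f"Question:\n{question}\n\n"
        f"Answer:\n{answer}"
    )
```

Each prompt would be sent as a single user message to the gpt-3.5-turbo chat-completions endpoint, so the only thing that changes between calls is the passage and the user's reply.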
I'm just using GPT-3.5 Turbo, and it's not especially accurate at grading. But the user can view the original material the question was based on, so they can verify the grade themselves. The point of the quiz is not so much accuracy as getting the user into an active reading mode as they go.
If you’re interested in having a conversation, I’d be happy to chat—I’m both interested in helping LessWrong improve, and also curious about the technical and logistical challenges you’re working with and the type of active engagement you’re hoping to promote.