Where is the Learn Everything System?
or how many steps are left till we have an education platform for everyone
I’m trying to figure out how to build a universal education platform.
I don’t know how to do it.
By a ‘universal education platform’ I mean a system that allows anyone to learn anything and everything.
That’s a little ambitious.
So for argument’s sake, let’s drop some of the most obvious constraints and imagine our target student is healthy, literate, and can sit behind a computer for at least an hour a day. Let’s also say the system can teach 80% of people 95% of what they would be able to learn given a top-notch personal tutor.
What we have then is a Learn Everything System (LES)[1].
How would it work and why don’t we have it?
My guess is that LES is an AI tutor controlling a rich digital simulation. By that I mean, it’s a game-based[2] learning experience orchestrated by your favorite teacher feeding you all of human knowledge.
It doesn’t exist because neither the AI tutors nor the digital simulations are strong enough.
Yet.
So let’s build LES, though I’m not sure yet how.
That said, I think it’s worth looking at what it would take and what the steps in between would be. I suspect the crux is how to create that ideal AI tutor, because the simulation part will likely solve itself along the way (we already have generative AI that looks like it’s playing Doom). And to that end, we need to understand a little more about how learning works.
A LES Model of Human Learning
Like any self-respecting researcher, I started my exploration of education with a deep dive into the literature.
Then I ran away screaming.
The field is so sprawling that I’m not sure a 4-year PhD would actually get me the insights I was hoping for. And that’s skipping over the mortifying realization of how hard the field has been hit by the replication crisis[3]. So instead I built my own model of learning and asked researchers and entrepreneurs in the field whether it made sense to them. Twelve conversations later, this is where I ended up:
You can model learning as consisting of 6 factors—Content, Knowledge Representation, Navigation, Debugging, Emotional Regulation, and Consolidation.
Content is what you learn.
Knowledge Representation is how the content is encoded.
Navigation is how you find and traverse the content.
Debugging is how you remove the errors in how you process the content.
Emotional Regulation is how you keep bringing your attention back to the content.
Consolidation is the process of making the content stay available in your memory.
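The six-factor model above can be sketched as a toy data structure. This is just my shorthand for the essay’s argument, not a measured benchmark; the coverage scores are the rough judgments argued in the paragraphs that follow.

```python
from enum import Enum

class LearningFactor(Enum):
    """The six factors of the learning model described above."""
    CONTENT = "what you learn"
    KNOWLEDGE_REPRESENTATION = "how the content is encoded"
    NAVIGATION = "how you find and traverse the content"
    DEBUGGING = "how you remove errors in processing the content"
    EMOTIONAL_REGULATION = "how you keep attention on the content"
    CONSOLIDATION = "how the content stays available in memory"

# Which factors a plain text-based LLM tutor covers today -- my rough
# scoring, not a measured result:
llm_coverage = {
    LearningFactor.CONTENT: True,
    LearningFactor.KNOWLEDGE_REPRESENTATION: True,
    LearningFactor.NAVIGATION: True,
    LearningFactor.DEBUGGING: True,
    LearningFactor.EMOTIONAL_REGULATION: False,
    LearningFactor.CONSOLIDATION: False,
}

# The gaps a LES tutor would need to close:
missing = [f.name for f, ok in llm_coverage.items() if not ok]
print(missing)  # → ['EMOTIONAL_REGULATION', 'CONSOLIDATION']
```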
So what are we missing if we want to create the LES AI tutor?
LLMs tick most of the boxes: they are trained on much of the internet (Content), can paraphrase the material till you can follow along (Knowledge Representation), can suggest an entry point on nearly any study topic and hold your hand throughout (Navigation), and will explain any problem you are stuck on (Debugging).
But.
They won’t help you keep your attention on task or stay motivated to learn (Emotional Regulation)[4].
Of course, you can ask it to do that. But for most people, by the time they notice their attention has drifted, it’s too late. And noticing is a hard skill in itself.
In contrast, imagine the best teacher you ever had as your personal tutor. They’ll subconsciously track your eye gaze and facial expressions, adjusting their words to your engagement. They’ll drum up examples that connect to your experience. They’ll even proactively offer just the right type of task to get you processing the content more deeply—an exercise, a break, or maybe even some extra reading.
You might wonder if teachers actually think they do this. I’ve asked, and the answer is mostly “no”. When I then probed what they thought made them a good teacher, the majority said “experience”. As far as I can tell “experience” is the stand-in term for “volume of training data for my subconscious processes in which I experiment with various approaches till I’ve hill-climbed to my local optimum in teaching performance”. Most can’t say what they do, why they do it, or pass on the process (obvious caveat: Some amazing teachers will of course be amazing teachers of teaching and teach all the teachers how to teach better. I suggest the backup ideal education system is to clone these particular teachers.)
Suffice it to say, introspection is hard. And devising scientific experiments that control for the myriad human factors that go into teaching effectively is possibly even harder. So how about we skip all that and instead test our hypothesis by seeing whether AI tutors get better based on which bits of teacher interaction they mimic? Thus I propose that the missing piece for the LES AI tutor is to train an AI model on video and audio of world-class tutors mentoring different types of students on various topics. The model would then learn how facial expressions, body language, and non-linguistic speech markers relate to the optimal prompts and interventions for keeping the student focused and energized to learn.
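To make the proposal concrete, here is a minimal sketch of what that training data and objective might look like. Every name here is my invention (there is no such dataset or API), and the hand-written baseline stands in for the model a real system would train and have to beat:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StudentSignals:
    """Hypothetical non-verbal features extracted from session video/audio."""
    gaze_on_material: float   # fraction of the last interval, 0.0-1.0
    facial_valence: float     # estimated affect, -1.0 (frustrated) to 1.0
    speech_hesitation: float  # filler/pause rate relative to baseline

@dataclass
class TutorIntervention:
    kind: str  # e.g. "exercise", "break", "example", "keep_going"

@dataclass
class SessionFrame:
    t: float                        # seconds into the session
    signals: StudentSignals
    intervention: TutorIntervention  # what the expert tutor actually did

def naive_policy(s: StudentSignals) -> TutorIntervention:
    """Hand-written baseline with arbitrary thresholds; a trained model
    would replace this function."""
    if s.gaze_on_material < 0.4:
        return TutorIntervention("break")
    if s.facial_valence < -0.3 or s.speech_hesitation > 1.5:
        return TutorIntervention("example")
    return TutorIntervention("keep_going")

def disagreement(frames: List[SessionFrame]) -> float:
    """Fraction of frames where the policy disagrees with the expert tutor;
    training would minimize this (or a smoothed version of it)."""
    wrong = sum(
        1 for f in frames
        if naive_policy(f.signals).kind != f.intervention.kind
    )
    return wrong / len(frames)
```

The point of the sketch is the shape of the supervision signal: expert interventions become labels, and non-verbal student state becomes the input features.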
So where are all the AI tutors now?
Well, Khan Academy has Khanmigo, Duolingo has Lily, and Brainly tries to straight up be the homework tutor of your dreams.
Except, when you try them out, the question quickly presents itself: Why talk to them instead of Claude or ChatGPT?
One answer is that integration with the learning material is helpful—Khanmigo, Lily, and Brainly can directly access the material you are studying. That’s great for this transitional phase, but in two or three years, you might just get Claude or ChatGPT integrated in a Google Lens app, reading your screen, or watching your face through your camera.
Conclusion & Confusion
So what do we do now?
Well, a Learn Everything System (LES), an AI tutor that fully engages you in a game-based educational experience by adapting a rich simulation of all human knowledge, seems to me the ideal form of learning for probably almost everyone. But we are still missing some pieces, and the biggest is that an LLM would need access to the non-verbal component of human interaction so it can proactively keep a student engaged with the material.
On the other hand, we live in strange times and I’m not sure LES is possible before we develop AGI. Maybe we can create a subset of LES that is achievable today, without further progress in AI. Maybe the next right question to ask is what a lesser LES would look like. And maybe once we know that, we could—shall we say—turn that Less-on[5].
[1] “les” means “lesson” in Dutch, my native language. It means “the” (plural) in French, which has a great all-the-things vibe. It means “them” in Spanish, which has a great all-the-people vibe.
[2] “game-based” is distinctly different from “gamified”! This deserves an essay in itself. But essentially, game-based learning is when you are playing for fun and you accidentally learn things without noticing. This happens to just about everyone who plays games, except most of it isn’t useful to them (“human transfer learning” is another essay I should write). In contrast, gamification is system designers reaching straight into your skull to pour dopamine into your exposed neural clefts.
[3] For instance, last spring I went to ResearchED, the foremost education science conference in the Netherlands. It brings together researchers and educators to support and learn from each other. There I discovered two things:
1. Education is a massive coordination problem.
2. Good teachers know what works. Researchers don’t know why.
Case in point: there was a talk on “instructional scaffolding”, one of the seminal concepts in the field, by researchers from Utrecht University. Instructional scaffolding refers to adaptively adding and removing instructional support based on how quickly the student is progressing through the material. It was originally proposed by Wood, Bruner, & Ross in 1976. Google Scholar shows over 18,000 citations. Every pedagogical course under the sun recommends the practice. The original study had 32 participants and 1 instructor across all 4 conditions (different levels of scaffolding). The replication study had 285 participants, 8 instructors, and 4 conditions.
Much to the surprise of every teacher in the room, the replication study found no effect. The paper isn’t published yet, but during the presentation the researchers shared their methods: they had controlled for the exact level of scaffolding and wording, while filming every interaction so panel members could independently judge adherence to the research protocol.
They were as surprised as anyone that instructional scaffolding had no effect on student performance. Well, maybe not exactly as surprised as the teachers. The teachers were utterly baffled. Many spoke up to say that scaffolding worked amazingly well in their classes. How could this be? The researchers had no idea.
[4] Technically, LLMs currently also lack any way to offer you spaced repetition (Consolidation). However, this seems so trivially solvable that I’ve smoothly elided that part of the reasoning, but somehow you are reading this footnote about it anyway.
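To back up “trivially solvable”: the core of a Leitner-style spaced-repetition scheduler fits in a few lines. This is a minimal sketch of the general technique (a card in box n gets reviewed every 2^n days), not any particular product’s algorithm:

```python
from datetime import date, timedelta

def next_review(box: int, reviewed_on: date) -> date:
    """A card in box n is due again 2**n days after its last review."""
    return reviewed_on + timedelta(days=2 ** box)

def update_box(box: int, recalled: bool) -> int:
    """Correct recall promotes the card; a lapse sends it back to box 0."""
    return box + 1 if recalled else 0

# Three successful reviews promote a new card from box 0 to box 3,
# so its next review lands 2**3 = 8 days out.
box = 0
today = date(2025, 1, 1)
for recalled in [True, True, True]:
    box = update_box(box, recalled)
print(box, next_review(box, today))  # → 3 2025-01-09
```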
[5] Some say this entire essay was written as a lead-up to this joke.