What is your opinion on progressing from easier to more difficult tasks?
This is a part of what the school environment is supposed to do—bring you a set of very simple exercises, and after you complete them, bring you another set of slightly more difficult exercises. Keeping you in the zone of proximal development, progressing one inferential step at a time, hypothetically until true mastery.
Seems like we should separate whether schools actually fail at achieving this goal, or whether this very goal is mistaken.
When we observe actual masters solve actual tasks, the problem is that we usually do not see them solve tutorial-level tasks. The simple tasks are usually solved already, and the master is doing something complex. Is observing a master doing a complex task really helpful for a complete beginner? -- Essentially, I wonder whether the teaching strategy of primitive societies may be great for tasks with short inferential distance, but less useful for tasks with long inferential distance. You can easily learn cooking by observing your parents, but can you really learn the architecture of operating systems the same way?
You definitely can learn some things from difficult crafts. For example, if you watch someone designing databases, and the person always declares a synthetic primary key called “ID” in each table, you are going to notice and remember that, even if you may have no idea why the person is doing it. That is, learning by observation may work better for those tasks where you don’t need to understand why something is done that way; where you can achieve good results by merely repeating the motions. (The charitable perspective here is that every kind of work has parts like that, and after you master those parts by copying, you can focus your thinking on the remaining parts and achieve better results than when you have to think about everything.)
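To make the primary-key example concrete, here is a minimal sketch using Python’s built-in sqlite3 module (the table and column names are hypothetical, chosen just for illustration). It shows the habit a novice might copy by observation: every table declares a synthetic integer key called “ID”, instead of reusing a “natural” key such as an email address:

```python
import sqlite3

# In-memory database for illustration; table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The observed habit: every table gets a synthetic primary key "ID",
# while natural candidate keys (like the email) are kept as plain data.
cur.execute("""
    CREATE TABLE customer (
        ID    INTEGER PRIMARY KEY,   -- synthetic key, copied by imitation
        email TEXT NOT NULL UNIQUE   -- natural candidate key, kept as data
    )
""")
cur.execute("""
    CREATE TABLE purchase (
        ID          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(ID),
        amount      REAL NOT NULL
    )
""")

cur.execute("INSERT INTO customer (email) VALUES ('a@example.com')")
customer_id = cur.lastrowid  # the auto-assigned synthetic key
cur.execute("INSERT INTO purchase (customer_id, amount) VALUES (?, ?)",
            (customer_id, 9.99))
conn.commit()
```

One “why” behind the habit, invisible to the pure imitator: if the customer later changes their email address, nothing that references `customer.ID` breaks, whereas using the email as the primary key would force cascading updates in every referencing table.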
It would probably help if the master kept commenting on their work while doing it, thus giving the observer some insight into the mental processes and decisions behind it. Saying the relevant keywords may prompt the observer to research those keywords further.
It might help if the master, instead of (or rather, in addition to) solving difficult real-life problems, would once in a while apply their knowledge to a simple problem, as an exercise, which would be less overwhelming for the observer. -- But this already goes against the idea of no intentional teaching, only observing experts in their natural environment. How dogmatic should we be about that?
So, suppose the master chooses a simple task, creates a new git repository, turns on screen capture, and starts coding, while commenting aloud on the process. Like, saying “I will create a new project”, while creating the new project in the IDE on the video, then “first I create the main controller, like this...”, writing the code, running it, writing unit tests, commenting on design decisions like “this is a separate functionality, so I am going to create a new class for that”, etc. At the end, the project is committed to the public repository, so the student can download it and examine it at home. The master might give suggestions about further improvements that could be made to the project, and perhaps provide some hints on how. -- So far, this scales well, because there is zero per-student effort that needs to be done. The next step, which does not scale well, could be the student doing the homework, and the master reviewing it, commenting on the good parts, and correcting the bad ones. This could perhaps be limited to the first few paying students (thus generating some reward for the master), while the remaining students could at least watch videos of the master correcting the work of their colleagues.
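As a hypothetical illustration of what such a recorded session might leave behind in the repository: a tiny project where the narrated design decision (“this is a separate functionality, so I create a new class for that”) is visible in the code itself, together with the kind of quick check the master might run on camera. All names here are invented for the sketch:

```python
class PriceFormatter:
    """Separate class, because formatting prices is a distinct responsibility
    from storing them -- the kind of decision the master would narrate aloud."""

    def format(self, cents: int) -> str:
        return f"${cents / 100:.2f}"


class Cart:
    """The 'main' functionality: holds item prices and computes a total."""

    def __init__(self) -> None:
        self._items: list[int] = []  # prices stored in cents

    def add(self, price_cents: int) -> None:
        self._items.append(price_cents)

    def total(self) -> int:
        return sum(self._items)


# A quick unit-test-style check, run and narrated during the recording:
cart = Cart()
cart.add(250)
cart.add(125)
assert cart.total() == 375
assert PriceFormatter().format(cart.total()) == "$3.75"
```

The value for the observer is not the (trivial) code, but seeing the decision points: when a new class is warranted, why prices are kept in integer cents, and how a check is written immediately after the behavior it verifies.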
Does this plan sound good, or what specific things are still missing there?
I don’t think we should be dogmatic about not teaching, and I should probably edit my post to make that more clear. Ensuring efficient reproduction of knowledge through society is a hard problem—so we shouldn’t limit our toolbox. That said, I do understand why a culture would look down upon teaching. It is a delicate craft and it often goes wrong. Especially if the teaching is initiated by the teacher, it easily becomes a bit condescending / limiting of the freedom of the learner. And nothing can kill your curiosity like an unasked-for, or unnecessarily long, lecture.
But yeah, I think what one should aim for is having learning centered on real productive environments, but then of course one can augment that by pointing people to YouTube lectures, or sitting down to show them things, or problem sets, or whatever, as long as that is motivated by a real need right now in the project, not some abstract future utility. And so for coding, one would probably need some onboarding in the form of how-to videos and maybe some Codecademy-style learning for the basics.
About the zone of proximal development: yes, that is a hard problem. I assume the easiest way to increase immersive learning is to first do it for people who are already fairly skilled, so the gap is small. Then you can gradually build more complex structures that allow you to bridge larger gaps. Getting to where a three-year-old can play her way into cancer research is probably pretty far off, at least if they don’t have cancer researchers in the family.
One part of the solution for how to grow the distance between the master and the novice while still staying in the zone of proximal development is a layered approach. This is what most apprenticeship models do, at least in non-European contexts: you have many novices at different levels of skill, and they imitate each other in a chain all the way up to the master (and of course it’s not strictly hierarchical but a mess of people observing and imitating across different distances of skill; there are also usually several masters, not the one-master-one-apprentice relationship we see in the more regulated markets of medieval Europe).
I think your idea of having masters explain what they do has merit. It is a super useful tool in some circumstances. But if we want to scale access to more people, I think one should not impose too many such demands on masters. It is cognitively taxing and harms productivity.
I’m glad to see this! I was going to type out a similar but much less well articulated comment.