I was just re-reading the sequences, and I have to say that as a teacher I really think you’re misjudging what is happening here.
Much of learning, it seems, is building up a mental framework: starting from certain concepts and attaching new ones to them, so that they can easily be recalled later and so that the connections between concepts can support your own thinking later.
From my point of view, it looks like the student (perhaps as long as a year ago) had successfully created a new concept node in their mind, “heat conduction”. They had connected this node to the concepts of heat transfer and physics. And even though they likely hadn’t activated this node at all in perhaps a year, they were able to take a specific example of something they saw in the real world, generalize it to something that might be related to the more general topic of heat transfer in physics, and create a hypothesis of heat conduction.
If you saw a machine learning algorithm that was able to do all that, you’d really be impressed! Something like Watson might be able to go from the concept of heat transfer to heat conduction, but it wouldn’t be able to generalize from a specific example of heat transfer it saw in the real world.
Now, they might not yet have many details attached to the “heat conduction” concept node in their head. But that’s OK: that they can learn, and it gives you something to build on as a teacher. If you teach it well and they can attach some details, images, and maybe some math to the concept of “heat conduction” in their head, then hopefully next time they’ll say “Maybe heat conduction? Hmmm, no, that doesn’t work,” which is even better. But there’s more going on here than just “guessing a password”; this is part of what constructing a model of the world looks like while the process is only partly complete.
I’m not sure you’ve described a different mistake than Eliezer has?
Certainly, a student with a sufficiently incomplete understanding of heat conduction is going to have lots of lines of thought that terminate in question marks. The thesis of the post, as I read it, is that we want to be able to recognize when our thoughts terminate in question marks, rather than assuming we’re doing something valid because our words sound like things the professor might say.
Yeah, that’s fair, although it sounds like the student he’s quoting did understand that.
I’m just saying that “guessing the teacher’s password” isn’t usually a fair way to view what’s going on in cases like this. “Building up a concept map of connections between related concepts” is probably more accurate, and that really is a vital part of the learning process, not a bad thing at all.
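To make the “concept map” picture concrete, here’s a toy sketch of the idea: concepts as nodes, learning as adding a node and linking it to existing ones, and recall as following connections outward from a cue. This is just an illustration of the analogy, not a model of how memory actually works; all the class and method names are made up for the example.

```python
from collections import defaultdict

class ConceptMap:
    """A toy concept map: concepts are nodes, learning adds edges."""

    def __init__(self):
        # concept -> set of directly linked concepts
        self.edges = defaultdict(set)

    def learn(self, concept, linked_to=()):
        """Create a concept node and attach it to existing concepts."""
        self.edges[concept]  # ensure the node exists even with no links
        for other in linked_to:
            self.edges[concept].add(other)
            self.edges[other].add(concept)

    def recall(self, cue):
        """Follow stored connections outward from a cue concept."""
        return sorted(self.edges[cue])

student = ConceptMap()
student.learn("physics")
student.learn("heat transfer", linked_to=["heat transfer" and "physics"])
student.learn("heat conduction", linked_to=["heat transfer", "physics"])

# A year later, a real-world observation cues "heat transfer", and the
# student reaches "heat conduction" by following a stored connection.
print(student.recall("heat transfer"))  # ['heat conduction', 'physics']
```

Even this crude version shows why the student’s guess is a success rather than a failure: the edge from “heat transfer” to “heat conduction” is exactly what lets a half-remembered concept surface when a real-world example activates its neighbor.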