Possibly. But you were selling this as one of the most important things to focus on if we wish to improve education. Given the immense effort that this would require, it’s massively unobvious that this would be the most cost-effective thing to concentrate on right now.
I would rank things like improving motivation by making the content of teaching more relevant to people’s daily lives, breaking down the artificial distinctions between subjects, and applying the educational and motivational techniques that the makers of commercial videogames have developed as more useful things to focus on right now. If the students are motivated enough, they can work out the problems themselves, even if not all of the inferential steps are completely explained.
It is one of the most important things, and it’s extremely cost-effective.
Given enough data, the dependency tree can be inferred implicitly, and we can also learn to accurately identify when a student doesn’t understand a concept. I’m actually working on this exact problem right now.
adamzerner said that he wanted a really fine-grained dependency map, at an even lower level than this. Are you also talking about such a level of fine-grainedness? I agree that dependency maps on the level of that map are likely to be cost-effective, it’s the more fine-grained ones that I was skeptical of.
If you are talking about an even lower level than the one displayed in that map, I’d love to hear more about it.
It is possible to detect correlations between questions (or even individual steps) and to implicitly determine skills at an even more fine-grained level than that, given sufficient data. We aren’t aiming to get that specific right at the moment (indeed, we may not have collected enough data yet to do that), but there’s no reason we shouldn’t eventually be able to.
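As a minimal sketch of the kind of thing I mean (the response matrix below is toy data, and the prerequisite heuristic is just one crude possibility, not our actual method):

```python
import numpy as np

# Toy data: rows = students, columns = questions, 1 = answered correctly.
# In practice this matrix would come from large-scale logged responses.
responses = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 1, 0, 1],
])

# Pairwise correlation between question-correctness columns: strongly
# correlated questions plausibly exercise the same underlying skill.
corr = np.corrcoef(responses, rowvar=False)

def prereq_score(a, b):
    """Crude asymmetric signal that question `a` is a prerequisite of `b`:
    students who miss `a` should almost never get `b` right."""
    missed_a = responses[:, a] == 0
    if not missed_a.any():
        return 0.0
    return 1.0 - responses[missed_a, b].mean()
```

With enough students, the asymmetry (`prereq_score(a, b)` high while `prereq_score(b, a)` is low) is what lets you recover a dependency direction, not just a correlation.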
Having tagged data is extremely useful for designing these kinds of algorithms, but we would only need a proportion of the data to be tagged, so it wouldn’t be anywhere near as costly as you expect.
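To illustrate why only partial tagging is needed (everything here is a made-up toy example, using a simple nearest-neighbour propagation rather than whatever we actually deploy):

```python
import numpy as np

# Hypothetical correlations between questions, as would be computed
# from student response data; skill tags are known for only some questions.
corr = np.array([
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
])
tags = {0: "fractions", 2: "algebra"}  # only a proportion is hand-tagged

def infer_tag(q):
    """Untagged questions inherit the tag of their most correlated
    tagged neighbour; tagged questions keep their own tag."""
    if q in tags:
        return tags[q]
    best = max(tags, key=lambda t: corr[q, t])
    return tags[best]
```

The hand-tagged questions act as seeds, and the structure learned from untagged responses does the rest, which is why the tagging cost doesn’t scale with the size of the question bank.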
Ohh, the Big Data approach. That makes sense, and also explains why it would be possible to do cost-effectively now when previous attempts haven’t gotten very far. Not sure why I didn’t think of that. Okay, formally retracting my skepticism.