I watch my coworkers teach their classes, and it amazes me how often they tell their kids to swim, watch them, and say, “That was bad. Do it again.” As if that comment were useful in any way. As if doing the same thing over and over again would ever produce different results.
Actually, this “technique” is rather useful for training muscle memory. For example, it’s how I learn to play songs on the piano: I keep doing it wrong until I can do it right.
All this technique does for me is automate what I have already been doing correctly (which can be useful). However, if I misread the music, play it a hundred times, and only then see my mistake and try to correct it, I will be much worse off than if I had played it carefully at first and never tolerated an error long enough for it to sink in. Stopping and checking that they understand WHAT they need to change, before making them try again, is important.
Intentional variation and/or permitting variation (otherwise known as playing) may serve you better than trying to get it right too early.
I agree. It is analogous to the way human muscular control varies: tasks are performed somewhat stochastically, and the result is less sensitivity to small errors and more efficient energy use. Playing is like Metropolis or Gibbs sampling: steadily but randomly figuring out what works through iterated perturbations. It is truly getting your map to match the territory (the ambient probability or transition model).
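To make the analogy concrete, here is a minimal sketch of random-walk Metropolis sampling: propose a small random perturbation, keep it when it (probabilistically) fits the target better, and the process settles into the target distribution over time. The target density and all parameter values below are hypothetical, chosen only for illustration.

```python
import math
import random

def metropolis(log_target, x0, step=0.5, n=10000, seed=0):
    """Random-walk Metropolis: try small random perturbations and
    accept each one with probability min(1, target(new)/target(old))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        proposal = x + rng.gauss(0.0, step)  # random perturbation ("play")
        # Accept if the proposal is more probable, or sometimes anyway.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)  # on rejection, the current x is kept
    return samples

# Hypothetical target: a standard normal, via its log-density (up to a constant).
samples = metropolis(lambda x: -0.5 * x * x, x0=3.0)
mean = sum(samples) / len(samples)
```

Even starting far from the mode (x0 = 3.0), the chain wanders toward the high-probability region and the sample mean approaches the target's mean — iterated perturbation, not any single correct move, is what does the work.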
I’m curious: are you a stats guy? Do you have a site? I am interested in what LW stats people work on.
Yeah, I am a stats / applied math guy. I work on machine learning and computer vision. I’m surprised all the time at how different multi-agent learning theory is from the CS perspective. I think of it as: applied math / engineering / applied stats tend to approach learning from the bottom up, whereas A.I. researchers tend to look top down.