All this technique does for me is automate what I have already been doing correctly (which can be useful). However, if I misread music, play it a hundred times, and then see my mistake and try to correct it, I will be much worse off than if I had played it carefully at first and never tolerated an error long enough for it to sink in. Stopping and checking that they understand WHAT they need to change before making them try again is important.
Intentional variation and/or permitting variation (otherwise known as playing) may serve you better than trying to get it right too early.
I agree. It is analogous to the way human muscular control varies: tasks are performed somewhat stochastically, and the result is less sensitivity to small errors and more efficient energy use. Playing is like Metropolis or Gibbs sampling, steadily but randomly figuring out what works through iterated perturbations. It is truly getting your map to match the territory (the ambient probability or transition model).
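To make the analogy concrete, here is a minimal sketch of a random-walk Metropolis sampler (not from the original discussion; the target density and step size are illustrative assumptions). Each iteration proposes a small perturbation and keeps it with probability proportional to how much it improves the fit, which is the "iterated perturbations" idea above:

```python
import math
import random

def metropolis(log_p, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis: propose a small perturbation of the
    current state, accept it with probability min(1, p(x')/p(x)),
    otherwise stay put. Returns the chain of visited states."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)   # iterated random perturbation
        # Accept/reject in log space to avoid overflow
        if math.log(rng.random()) < log_p(x_new) - log_p(x):
            x = x_new                      # keep what "works"
        samples.append(x)
    return samples

# Illustrative target: a standard normal, via its log-density
# (up to an additive constant). The chain drifts from x0 = 3.0
# toward the high-probability region and then explores it.
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_steps=20000)
mean = sum(chain) / len(chain)
```

Despite never computing the target distribution directly, the chain's empirical mean and variance converge to those of the target, which is the sense in which random perturbation plus selective retention "matches the map to the territory."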
I’m curious: are you a stats guy? Do you have a site? I am interested in what LW stats people work on.
Yeah, I am a stats / applied math guy. I work on machine learning and computer vision. I’m surprised all the time at how different multi-agent learning theory is from the CS perspective. I think of it as: applied math / engineering / applied stats tend to work on learning from the bottom up, whereas A.I. researchers tend to look top down.