Well, for one thing, Chapman was (at least at one point) a genuine, credentialed AI researcher, and a good fraction of the content on Less Wrong seems to be a kind of armchair AI research. That’s the outside view, anyway. The inside view (from my perspective) matches your evaluation: he seems just plain wrong.
I think a few people here are credentialed, or working on their credentials in machine learning.
But almost everything useful I learned, I learned by just reading the literature. There were three main guys I thought had good answers—David Wolpert, Jaynes, and Pearl. I think time has put its stamp of approval on my taste.
Reading more from Chapman, he seems fairly reasonable as far as AI goes, but he’s got a few ideological axes to grind against some straw men.
On his criticisms of LW and Bayesianism: is there anyone here who doesn’t realize you need algorithms and representations beyond Bayes’ rule? Not too long ago we had a similar straw-man massacre, where everyone said “yeah, we have algorithms that do information processing other than Bayes’ rule—duh”.
And he really should have stuck it out longer in AI, as Hinton has gone a long way toward solving the problem Chapman thought was insurmountable—getting a proper representation of the space to analyze from the data, without human spoon-feeding. You need a hidden variable model of the observable data, and you should be able to get it by predicting subsets of the observables from the other observables. That much was obvious; it just took Hinton to find a good way to do it. Others are coming up with generalized learning modules and mapping them to brain constructs. There was never any need to despair of progress.
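To make the idea concrete: here is a minimal toy sketch (my own illustration, not Hinton’s actual method) of learning a hidden-variable representation by masking out some observables and training a small network to reconstruct the full data from what remains—essentially a denoising autoencoder. All names and dimensions are made up for the example.

```python
# Toy sketch: recover hidden structure by predicting masked observables
# from the remaining ones (denoising-autoencoder style), using plain
# numpy and hand-written gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples of 8 observables driven by 2 hidden factors.
H = rng.normal(size=(200, 2))            # true hidden causes (unknown to learner)
M = rng.normal(size=(2, 8))              # mixing of hidden causes into observables
X = H @ M + 0.05 * rng.normal(size=(200, 8))

d, k = 8, 2                              # observable and hidden dimensions
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))
lr = 0.01

def reconstruction_mse(W_enc, W_dec):
    # Evaluate: encode the full data, decode, compare to the original.
    Z = np.tanh(X @ W_enc)
    return np.mean((Z @ W_dec - X) ** 2)

mse_before = reconstruction_mse(W_enc, W_dec)

for step in range(2000):
    mask = rng.random(X.shape) > 0.3     # keep ~70% of observables as input
    X_in = X * mask                      # hide the rest
    Z = np.tanh(X_in @ W_enc)            # hidden representation
    X_hat = Z @ W_dec                    # predict ALL observables, hidden ones included
    err = X_hat - X
    # Backprop through the decoder, the tanh, and the encoder.
    grad_dec = Z.T @ err / len(X)
    grad_Z = (err @ W_dec.T) * (1 - Z ** 2)
    grad_enc = X_in.T @ grad_Z / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse_after = reconstruction_mse(W_enc, W_dec)
print(mse_before, mse_after)
```

The 2-unit bottleneck forces the network to find a compact hidden-variable summary of the 8 observables; masking during training means that summary must carry enough information to predict observables it never saw—the “predict subsets from the other subsets” idea in miniature.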