Deutsch is interesting. He seems very close to the LW camp, and I think he’s someone LWers should at least be familiar with. (This article is not as good an introduction as The Beginning of Infinity, I think.)
I suspect, personally, that the conflict between “Popperian conjecture and criticism” and the LW brand of Bayesianism is a paper tiger. See this comment thread in particular.
Deutsch is right that a huge part of artificial general intelligence is the ability to infer explanatory models from experience, drawn from the complete (infinite!) set of possible explanations, rather than merely fitting parameters to a limited set of hardcoded explanatory models (as today's AI programs do). But that's what I think people here already think (generally under the name Solomonoff induction).
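To make the contrast concrete, here is a toy sketch (entirely my own illustration, not Deutsch's argument or a real Solomonoff inductor, which is uncomputable): fitting a parameter inside one fixed model class versus weighing an open-ended set of candidate generating programs by simplicity. The candidate "programs" and their bit-length costs below are made up for the example.

```python
# Contrast: (1) parameter fitting within a single fixed model class, vs
# (2) Solomonoff-style weighing of candidate generating programs by
#     simplicity (prior weight 2^-length). Hand-rolled toy only.
from fractions import Fraction

data = [0, 1, 0, 1, 0, 1]  # observed bit sequence

# (1) Parameter fitting: the only "model" is an i.i.d. coin with bias p;
# the best fit just matches the frequency of 1s and misses the pattern.
p = Fraction(sum(data), len(data))
print("coin model predicts next bit = 1 with prob", p)  # 1/2

# (2) Program search: candidate generators (position -> bit), each with a
# crude "description length" in bits (invented numbers for illustration).
candidates = [
    (lambda i: 0,           "always-0",  3),
    (lambda i: 1,           "always-1",  3),
    (lambda i: i % 2,       "alternate", 5),
    (lambda i: (i + 1) % 2, "alternate-offset", 6),
]

# Keep only programs consistent with every observation; weight by 2^-length.
posterior = [(f, name, Fraction(1, 2 ** length))
             for f, name, length in candidates
             if all(f(i) == b for i, b in enumerate(data))]

total = sum(w for _, _, w in posterior)
prob_next_1 = sum(w for f, _, w in posterior if f(len(data)) == 1) / total
print("program search predicts next bit = 1 with prob", prob_next_1)  # 0
```

Here only the "alternate" program survives the data, so the program-search predictor confidently continues the pattern (next bit 0), while the coin-fitting model can never say more than 1/2. The real difficulty Deutsch and Solomonoff induction both point at is doing this over all computable explanations, not a hand-picked list of four.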
Deutsch seems pretty clueless in the section quoted below. I don’t see why students should be interested in what he has to say on this topic.
It was a failure to recognise that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program. Nor can it be achieved merely by improving their performance at tasks that they currently do perform, no matter by how much.
He’s clever enough to get a lot of things right, and I think the things he gets wrong, he gets wrong for technical reasons. This means it’s relatively quick to dispense with his confusions if you know the right response, but if you can’t, that points out places where you need to shore up your knowledge. (Here I’m using the general you; I’m pretty sure you didn’t have any trouble, Tim.)
I also think his emphasis on concepts, which seems to be rooted in his choice of epistemology, is a useful reminder of the core difference between AI and AGI, but I don’t expect it to be novel content for many readers (as opposed to just novel emphasis).