I don’t see why students shouldn’t be interested in what he has to say on this topic.
He’s clever enough to get a lot of things right, and I think the things he gets wrong, he gets wrong for technical reasons. This means it’s relatively quick to dispense with his confusions if you know the right response, and if you can’t, it points out places where you need to shore up your knowledge. (Here I’m using the general you; I’m pretty sure you didn’t have any trouble, Tim.)
I also think his emphasis on concepts, which seems to be rooted in his choice of epistemology, is a useful reminder of the core difference between AI and AGI, but I don’t expect it to be novel content for many (as opposed to just novel emphasis).