Yeah, I’d say that’s a fair approximation. The AI needs a way to compress lots of input data into a hierarchy of functional categories. It needs a way to recognize a cluster of information as, say, a hammer. It also needs to recognize similarities between a hammer and a stick or a crowbar or even a chair leg, in order to queue up various policies for using that hammer (if you’ve read Hofstadter, think of analogies). Very roughly, the utility function guides what it “wants” done, and the statistical inference guides how it does it (how it figures out what actions will accomplish its goals). That seems to be more or less what we need for a machine that can do quite a bit.
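To make that division of labor concrete, here’s a toy sketch of expected-utility action selection. Everything in it is hypothetical (the tools, the “nail_depth” goal, the effectiveness numbers); the point is just the structure: a world model stands in for the statistical inference, and a utility function stands in for what the agent “wants.”

```python
import random

def utility(state):
    """What the agent 'wants': here, just a deeper nail (hypothetical goal)."""
    return state.get("nail_depth", 0.0)

def predict_outcome(state, action):
    """Stand-in for statistical inference: a learned world model would go here.
    We fake it with a noisy guess at how far each tool drives the nail."""
    effectiveness = {"hammer": 1.0, "crowbar": 0.6, "chair_leg": 0.4, "stick": 0.1}
    new_state = dict(state)
    new_state["nail_depth"] = (state.get("nail_depth", 0.0)
                               + effectiveness[action]
                               + random.gauss(0, 0.05))
    return new_state

def choose_action(state, actions, n_samples=20):
    """Pick the action whose predicted outcomes score highest on average."""
    def expected_utility(action):
        return sum(utility(predict_outcome(state, action))
                   for _ in range(n_samples)) / n_samples
    return max(actions, key=expected_utility)

print(choose_action({"nail_depth": 0.0}, ["hammer", "crowbar", "chair_leg", "stick"]))
```

Note that the hammer/crowbar/chair-leg similarity lives entirely inside `predict_outcome`: a real system would have to infer those effectiveness estimates from experience rather than having them handed over in a table.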
If you’re just looking to build any AGI, the hard part of those two seems to be getting a nice, working method for extracting statistical features from its environment in real time. The (significantly) harder of the two for a Friendly AI is getting the utility function right.