You probably should have mentioned the earlier discussion of your idea on the open thread, where I believe I spotted some critical problems with where you're going: you seem to be endorsing a sort of "blank slate" model, in which humans have a really good reasoning engine and the stimuli they get after birth are sufficient to make all the right inferences.
However, the experimental evidence (cf. Pinker's The Blank Slate) tells us that humans draw a significantly smaller set of inferences from their sense data than is logically possible under the constraint of Occam's razor: there are grammatical errors that children never make in any language; there are expectations that all babies have at the same point in development, though none of them has gathered enough postnatal sense data to justify such inferences; and so on.
I conclude that it is fruitless to try to find "general intelligence" by looking for the general algorithm that would make the inferences humans make, given only postnatal stimuli. My alternative suggestion is to identify human intelligence as a combination of general reasoning and pre-encoded, environment-specific knowledge: knowledge humans do not have to relearn from scratch after birth, because the brain's wiring-up in the womb already filters out inference patterns that don't win.
That knowledge can come from the "accumulated wisdom" of evolutionary history, which means you need to account for how that data was transformed into a human's present internal model.
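To make the "pre-encoded filter" point concrete, here is a minimal sketch in Python (toy domain, hypothetical names like INNATE_BIAS; an illustration of the inductive-bias argument, not a model of the actual brain): the same sparse "postnatal" data leaves a fully general learner with many consistent hypotheses, while a learner whose hypothesis space was filtered before it saw any data gets pinned down to one.

```python
# A toy illustration (hypothetical names, toy domain) of innate bias vs. a fully
# general learner: concepts are functions from 3 binary features to a label.
from itertools import product

INPUTS = list(product((0, 1), repeat=3))

# Every logically possible concept over this domain: 2^8 = 256 hypotheses.
ALL_HYPOTHESES = [dict(zip(INPUTS, labels)) for labels in product((0, 1), repeat=8)]

# A stand-in for evolution's pre-encoding: only concepts that depend on the
# first feature alone survive the "wiring-up in the womb".
INNATE_BIAS = [h for h in ALL_HYPOTHESES
               if all(h[(a, b, c)] == h[(a, 0, 0)] for a, b, c in INPUTS)]

def consistent(hypotheses, data):
    """Keep only the hypotheses that agree with every observed (input, label) pair."""
    return [h for h in hypotheses if all(h[x] == y for x, y in data)]

# Sparse "postnatal" observations: two labelled examples.
observations = [((0, 0, 0), 0), ((1, 1, 0), 1)]

print(len(consistent(ALL_HYPOTHESES, observations)))  # 64 hypotheses still fit
print(len(consistent(INNATE_BIAS, observations)))     # 1: the bias pins it down
```

The point is just that the same data underdetermines the general learner but suffices for the biased one; the interesting question is where the bias (hand-coded here) comes from, which is what I mean by accounting for evolution's "accumulated wisdom".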
ETA: Wow, I was sloppy when I wrote this; I hope the point shone through anyway. Typos and missing words have been corrected, so it should make more sense now.
The reason I didn’t link to that discussion is that it was kind of tangential to what will be my main points. My goal is to understand the natural setting of the learning problem, not the specifics of how humans solve it.
But you've made assumptions that will keep you from finding that setting: your approach already commits you to treating humans as a blank slate. Humans aren't "blank slate with great algorithm"; they're "heavily formatted slate with respectable context-specific algorithm".
Let’s postpone this debate until the main points become a bit more clear. I don’t think of myself as “treating humans” at all, much less as a blank slate!
Could you at least give some signal of your idea's quality that distinguishes you from the millions of people with hopeless ideas who scream, "You guys are doing it all wrong; I've got something that's totally different from everything else and will get it right this time"?
Because a lot of what you’ve said so far isn’t promising.
Yikes, take it easy. When I said “let’s argue”, I meant let’s argue after I’ve made some of my main points.
Yes, I read that part of your comment. But you've posted on the order of 1,500 words about your idea by now (this article plus our past exchange), and I still can't find a sign of anything promising; you've had more than enough space to distinguish yourself from the dime-a-dozen folks claiming to have all the answers on AI.
I strongly recommend that you look at whatever you have prepared for your next article, and cut it down to about 500 words in which you get straight to the point.
LW is a great site because of its frequent comments and articles from people who have assimilated Eliezer Yudkowsky’s lessons on rationality; I’d hate to see it turn into a platform for just any AI idea that someone thinks is the greatest ever.
Which will be soon, right?