I’m fine with you redirecting to a previous post, but I would have appreciated at least a one-sentence summary and opinion.
My opinion is: I think if you want to figure out the gory details of the neocortical algorithm, and you want to pick ten authors to read, then Jeff Hawkins should be one of them. If you’re only going to pick one author, I’d go with Dileep George.
I’m happy to chat more offline.
What is the argument for the neocortex learning algorithm being human-legible?
Well, there’s an inside-view argument that it’s human-legible because “It basically works like, blah blah blah, and that algorithm is human-legible because I’m a human and I just legibled it.” I guess that’s what Jeff would say. (Me too.)
Then there’s an outside-view argument that goes “most of the action is happening within a ‘cortical mini-column’, which consists of about 100 neurons mostly connected to each other. Are you really going to tell me that 100 neurons implement an algorithm that is so complicated that it’s forever beyond human comprehension?” Then again, BB(5) (the fifth Busy Beaver number) is still unknown, so circuits with a small number of components can be quite complicated. So I guess that’s not all that compelling an argument on its own.
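To make the “small machines can be surprisingly complicated” point concrete, here’s a toy sketch (my illustration, not part of the original exchange): a minimal Turing-machine simulator running the classic 2-state, 2-symbol busy-beaver champion. BB(5) is about machines of exactly this form with just five states.

```python
# Toy Turing-machine simulator, just to make concrete how few "components"
# a busy-beaver machine has. The transition table below is the well-known
# 2-state, 2-symbol champion (halts after 6 steps with four 1s written);
# BB(5) concerns machines of the same form with only five states.
from collections import defaultdict

# (state, symbol) -> (write, move, next_state); 'H' means halt
TABLE = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'H'),
}

def run(table, max_steps=10_000):
    tape = defaultdict(int)        # unbounded tape, initially all 0s
    pos, state, steps = 0, 'A', 0
    while state != 'H' and steps < max_steps:
        write, move, state = table[(state, tape[pos])]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

print(run(TABLE))  # -> (6, 4): halts after 6 steps with four 1s on the tape
```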
I think a better outside-view argument is that if one algorithm is really going to learn how to parse visual scenes, put on a shoe, and design a rocket engine … then such an algorithm really has to work by simple, general principles—things like “if you’ve seen something, it’s likely that you’ll see it again”, and “things are often composed of other things”, and “things tend to be localized in time and space”, and TD learning, etc.
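As one illustration of how simple such a principle can be, here’s a minimal TD(0) value-learning sketch (again my example, not anything from the discussion itself): the whole learning rule is a one-line update, yet it’s the kind of ingredient that plausibly shows up in the brain’s reward-learning machinery.

```python
# Minimal TD(0) sketch: learn state values V(s) from experienced transitions.
# The entire "algorithm" is the one-line update inside the loop -- the sort of
# simple, general principle the argument above is pointing at.
def td0(episodes, alpha=0.1, gamma=0.9):
    """episodes: iterable of [(state, reward, next_state), ...] trajectories."""
    V = {}
    for episode in episodes:
        for s, r, s_next in episode:
            v_s = V.get(s, 0.0)
            v_next = 0.0 if s_next is None else V.get(s_next, 0.0)
            # TD(0) update: nudge V(s) toward the bootstrapped target r + gamma*V(s')
            V[s] = v_s + alpha * (r + gamma * v_next - v_s)
    return V

# Tiny usage example on a hypothetical 3-state chain ending in a reward.
chain = [("start", 0.0, "middle"), ("middle", 0.0, "end"), ("end", 1.0, None)]
values = td0([chain] * 200)
print(values)  # "end" ends up near 1.0; earlier states near gamma-discounted values
```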
Also, GPT-3 shows that human-legible learning algorithms are at least up to the task of learning language syntax and semantics, plus learning quite a bit of knowledge about how the world works.
Things like common sense and human morals might be easier to push for.
For common sense, my take is that it’s plausible that a neocortex-like AGI will wind up with some of the same concepts as humans, in certain areas and under certain conditions. That’s a hard thing to guarantee a priori, and therefore I’m not quite sure what that buys you.
For morals, there is a plausible research direction of “Let’s make AGIs with a similar set of social instincts as humans. Then they would wind up with similar moral intuitions, even when pushed to weird out-of-distribution hypotheticals. And then we can do better by turning off jealousy, cranking up conservatism and sympathy, etc.” That’s a research direction I take seriously, although it’s not the only path to success. (It might be the only path to success that doesn’t fundamentally rely on transparency.) It faces the problem that we don’t currently know how to write the code for human-like social instincts, which could wind up being quite complicated. (See discussion here—relevant quote is: “I can definitely imagine that the human brain has an instinctual response to a certain input which is adaptive in 500 different scenarios that ancestral humans typically encountered, and maladaptive in another 499 scenarios that ancestral humans typically encountered. So on average it’s beneficial, and our brains evolved to have that instinct, but there’s no tidy story about why that instinct is there and no simple specification for exactly what calculation it’s doing.”)
Thanks!