@Julian: Correct, but if you step outside the evolutionary biology of human intelligence, there’s no way anyone can follow you except by being able to do high-falutin’ theoretical thinking of their own. Meanwhile, the actual evolutionary history strongly contradicts assertions like “you need exponential computing power for linear performance increases” or “the big wins are taken at the start so progress is logarithmic with optimization power expended on the problem”.
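(To make the shape of those assertions concrete — this formalization is mine, not Julian's: both claims say that each fixed increment of performance $\Delta P$ costs a constant multiplicative factor $k > 1$ in computing or optimization power $C$, which is just the diminishing-returns curve

$$P(C) = \frac{\Delta P}{\log k}\,\log C + \mathrm{const},$$

i.e., performance logarithmic in the resources expended. Hominid evolutionary history is a candidate counterexample to exactly this curve.)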
Also @HA: I once asked Kurzweil why he kept talking about human brain modeling. He said, in effect, “Because that way we don’t have to understand intelligence.” I said, in effect, “But doing it without understanding is always going to be the most difficult way, not the least difficult way.” He said, “But people aren’t willing to believe you can understand intelligence, so I talk about human brain modeling because it clearly shows that AI is possible.” I said, “That’s a conservative assumption in some ways, but you’re using it to reassure people about the benevolence of AIs, which is not conservative; you can’t get real futuristic predictions by presuming that the thought experiments easiest for the public to accept are the way things will actually happen in real life.” If Kurzweil has ever responded to that, I have not seen it.
The fundamental folly in AI is trying to bypass the hard problems instead of unraveling the mysteries. (And then you say you can build AI in 10 years, even though you’re not sure exactly how it will work, because you think you can get by without understanding the mysteries...) Modeling the human brain fits exactly into this pattern. Now, the fact that something fits the pattern of a folly does not make it technologically impossible; but if you really wanted the case for abstract understanding and against brain modeling, that would be a longer story. It involves observations like, “The more you model, the more you understand at the abstract level too, though not necessarily vice versa” and “Can you name a single case in the history of humanity where a system has been reverse-engineered by duplicating the elementary level without understanding how the higher levels of organization worked? If so, how about two cases? Is either of them at all important?”
There’s a reason we don’t build walking robots by exactly duplicating biological neurology, musculature, and skeletons: proof of concept or not, by the time you get anywhere at all on the problem, you are starting to understand it well enough not to do it exactly the human way.
Many early “flying machines” had feathers and a beak. They didn’t fly. Same principle. The only reason it sounds plausible is that flying seems so mysterious that you can’t imagine actually, like, solving it. So you imagine doing an exact anatomical imitation of a bird; that way you don’t have to imagine having solved the mystery.
If you want more than that, it’ll take a separate post at some point.