“There’s a reason why we don’t build walking robots by exactly duplicating biological neurology and musculature and skeletons, and that’s because, proof-of-concept or not, by the time you get anywhere at all on the problem, you are starting to understand it well enough to not do it exactly the human way.”
That’s trivially (usually) true, which is why I’m curious to what degree the AI old-timers have actually considered and dismissed human brain modeling, in conjunction with the kinds of exponential technological growth Kurzweil publicized, when they claim their experience leads them to believe AGI won’t arise for several generations (on the order of 100 years) rather than within one. I’m especially curious if they’ve worked mostly on abstract AGI without doing substantial work on human brain modeling as well.