For a perspective very different from narrow AI and, to a lesser extent, from Goertzel*, you might want to contact Pat Langley. He is taking a Good Old-Fashioned AI (GOFAI) approach to Artificial General Intelligence:
http://www.isle.org/~langley/
His competing AGI conference series:
http://www.cogsys.org/
Goertzel probably approves of all the work Langley does; certainly OpenCog's reasoning engine is similarly structured. But unlike Langley, the OpenCog team thinks there isn't one true path to human-level intelligence, GOFAI or otherwise.
EDIT: Not that I think you shouldn't be talking to Goertzel! In fact I think his CogPrime architecture is the only fully fleshed-out AGI design which, as specified, could reach and surpass human intelligence, and the GOLUM meta-AGI architecture is the only FAI design I know of. My only critique is that certain aspects of it cut corners, e.g. the rule-based PLN probabilistic reasoning engine rather than an actual Bayesian-network updating engine à la Pearl et al.
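To make the contrast concrete, here is a minimal sketch of the kind of principled posterior updating a Pearl-style Bayes-net engine performs: exact inference by enumeration over a tiny textbook network. The variable names and CPT numbers are standard placeholders I've chosen for illustration; this is not OpenCog or PLN code.

```python
# Illustrative sketch: exact inference by enumeration on a tiny Bayesian
# network (the textbook Rain/Sprinkler/WetGrass example). CPT values are
# assumed placeholders, not drawn from any real system.
from itertools import product

# Network: Rain -> Sprinkler, Rain -> WetGrass <- Sprinkler
P_RAIN = 0.2
P_SPRINKLER = {True: 0.01, False: 0.4}            # P(S=T | R)
P_WET = {(True, True): 0.99, (True, False): 0.9,  # P(W=T | S, R)
         (False, True): 0.8, (False, False): 0.0}

def joint(r: bool, s: bool, w: bool) -> float:
    """P(R=r, S=s, W=w) via the chain rule over the network structure."""
    pr = P_RAIN if r else 1 - P_RAIN
    ps = P_SPRINKLER[r] if s else 1 - P_SPRINKLER[r]
    pw = P_WET[(s, r)] if w else 1 - P_WET[(s, r)]
    return pr * ps * pw

def posterior_rain_given_wet() -> float:
    """P(R=T | W=T): sum out the hidden variable S, then normalise."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

if __name__ == "__main__":
    print(f"P(Rain | WetGrass) = {posterior_rain_given_wet():.3f}")
```

Enumeration is exponential in the number of hidden variables, which is exactly why practical engines use message passing or sampling; the point here is only that the update is a single normalised application of probability theory rather than a collection of hand-written inference rules.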
Thanks!