My two cents here are just:
1) Narrow AI is still the bottleneck to Strong AI, and a development feedback loop, especially in the area of NLP, is what is eventually going to crack the hardest problems.
2) OpenCog’s hypergraphs do not seem especially useful. The expressive power of a representation cannot make up for the lack of sufficiently strong self-modification techniques; without them, the system will never self-modify into anything useful. Interconnects and reflection just let a program mess itself up, not become more useful, and neither scale nor better NLP modules alone are a solution (see the sketch below).
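
To make the reflection point concrete, here is a minimal sketch in plain Python (not OpenCog's actual AtomSpace API; all names are illustrative). A program can freely inspect and rewrite a hypergraph, but without some evaluation signal "self-modification" is just undirected mutation:

```python
import random

class Hypergraph:
    def __init__(self):
        self.nodes = set()   # atoms, e.g. concepts
        self.edges = []      # hyperedges: tuples of any arity

    def add_edge(self, *members):
        self.nodes.update(members)
        self.edges.append(tuple(members))

    def random_rewrite(self):
        """Reflection without guidance: delete or rewire an arbitrary edge."""
        if not self.edges:
            return
        i = random.randrange(len(self.edges))
        if random.random() < 0.5:
            del self.edges[i]  # may destroy useful structure
        else:
            self.edges[i] = tuple(random.sample(sorted(self.nodes),
                                                k=len(self.edges[i])))

g = Hypergraph()
g.add_edge("cat", "mammal")          # an "inheritance"-style relation
g.add_edge("mammal", "animal")
g.add_edge("cat", "chases", "mouse")

for _ in range(10):
    g.random_rewrite()   # nothing here says whether the graph got better or worse
print(g.edges)
```

The missing piece is exactly what the rewrite loop above lacks: a scoring or feedback mechanism that tells the system which modifications are improvements, which is why representation plus reflection alone isn't enough.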