The first one you mention IS the OpenCog model ;) The second isn’t really an architecture.
There are ideas for AGI based on pure NN primitives—such as what DeepMind is working towards—but so far they are just ideas and napkin sketches. The only working general intelligence codebases are GOFAI to varying degrees at this time.
Personal phenomenological observation: when I write a text, it feels as if some generative network produces a text stream similar to everything I have read before, much like the RNN Karpathy described. But above it sits a reasoning engine that checks whether the generated stream actually carries any meaning.
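To make the idea concrete, here is a minimal sketch of that two-layer picture: a generator that babbles plausible-looking sequences (standing in for a char-RNN style language model) and a separate checker that only lets "meaningful" ones through. Everything here is hypothetical illustration, not a real architecture; the vocabulary, the `generate_stream` sampler, and the crude noun-verb test in `is_meaningful` are all made-up placeholders.

```python
import random

# Hypothetical toy vocabulary; a real system would use a trained language model.
VOCAB = ["the", "cat", "sat", "on", "mat", "idea", "sleeps", "green"]

def generate_stream(length=6):
    """Stand-in for the generative network: emits a word stream resembling prior input."""
    return [random.choice(VOCAB) for _ in range(length)]

def is_meaningful(words):
    """Stand-in for the reasoning engine: a crude check for some structure in the stream."""
    # Hypothetical criterion: require a noun immediately followed by a verb somewhere.
    nouns, verbs = {"cat", "idea", "mat"}, {"sat", "sleeps"}
    return any(a in nouns and b in verbs for a, b in zip(words, words[1:]))

def write_sentence(max_tries=100):
    """Generate candidate streams and keep the first one the checker accepts."""
    for _ in range(max_tries):
        candidate = generate_stream()
        if is_meaningful(candidate):
            return " ".join(candidate)
    return None  # nothing meaningful found within the budget

if __name__ == "__main__":
    print(write_sentence())
```

The point of the sketch is only the division of labor: generation is cheap and associative, while the check on top is where the "does this make sense" judgment lives.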