[Please read the OP before voting. Special voting rules apply.]
The necessary components of AGI are quite simple, and have already been worked out in most cases. All that is required is a small amount of integrative work to build the first UFAI.
What do you mean by that? Technically, all that is required is the proper arrangement of transistors.
I mean that the component pieces such as planning algorithms, logic engines, pattern extractors, evolutionary search, etc. have already been worked out, and that there exist implementable designs for combining these pieces into an AGI. There aren’t any significant known unknowns left to be resolved.
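To make “combining these pieces” concrete, here is a minimal, purely illustrative sketch of what the component interfaces and an integrative loop could look like. Every name in it (WorkingMemory, PatternExtractor, LogicEngine, Planner, cognitive_cycle) is a hypothetical placeholder of my own, not part of any existing system or of Goertzel’s design:

```python
# Illustrative sketch only: hypothetical component interfaces for an
# integrative AGI loop. These classes are stand-ins for the kinds of
# pieces named above (planner, logic engine, pattern extractor, etc.).

from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class WorkingMemory:
    """Shared store that all components read from and write to."""
    facts: List[Any] = field(default_factory=list)
    goals: List[Any] = field(default_factory=list)


class PatternExtractor:
    def extract(self, observations: List[Any]) -> List[Any]:
        # Placeholder: a real system would mine regularities from raw
        # observations (clustering, frequent subgraphs, and so on).
        return observations


class LogicEngine:
    def infer(self, facts: List[Any]) -> List[Any]:
        # Placeholder: derive new facts from existing ones.
        return facts


class Planner:
    def plan(self, facts: List[Any], goals: List[Any]) -> List[Any]:
        # Placeholder: return a sequence of actions toward the goals.
        return []


def cognitive_cycle(memory: WorkingMemory, observations: List[Any]) -> List[Any]:
    """One pass of the hypothetical perceive -> infer -> plan loop."""
    extractor, logic, planner = PatternExtractor(), LogicEngine(), Planner()
    memory.facts += extractor.extract(observations)
    memory.facts = logic.infer(memory.facts)
    return planner.plan(memory.facts, memory.goals)
```

The point of the sketch is only that the integration is a matter of agreeing on shared data structures and a control loop, not of solving new component-level problems.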
Then where’s the AI?
All the pieces for Bitcoin were known and available in 1999. Why did it take 10 years to emerge?
I don’t see anything in there about a goal system—not even one that optimizes for paperclips. Goertzel and his lot are dualists and panpsychists: how can we expect them to complete a UFAI when they turn to mysticism when asked to design its soul?
So, um, what’s the problem, then?
There are no problems. UFAI could be constructed by a few people who know what they are doing, on today’s commodity hardware, with only a few years’ effort.
The outside view on this is that such predictions have been made since the start of A(G)I 50 or 60 years ago, and it’s never panned out. What are the inside-view reasons to believe that this time it will? I’ve only looked through the table of contents of the Goertzel book—is it more than a detailed survey of AGI work to date and speculations about the future, or are he and his co-workers really onto something?
My prediction / contrarian belief is that they are really onto something, with caveats (did you look at the second book? that’s where their own design is outlined).
At the very highest level I think their CogPrime design is correct, in the sense that it implements a human-level or better AGI that can solve many useful categories of real-world problems, and learn / self-modify to handle those categories it is not well adapted to out of the box.
I do take issue with some of the specific choices they made, both in fleshing out the components and in the current implementation, OpenCog. For example, I think using the rule-based PLN logic engine was a critical mistake, but at an architectural level that is a simple change to make, since the logic engine is (or should be) loosely coupled to the rest of the design (it’s not in OpenCog, but c’est la vie; I think a rewrite is necessary anyway for other reasons). I’d swap it out for a form of logical inference based on Bayesian probabilistic graph models a la Pearl. There are various other tweaks I would make regarding the atom space, sub-program representation, and embodiment. I’d also implement the components within the VM language of the AI itself, so that it is able to self-modify its own core capabilities. But at the architectural level these are tweaks of implementation details; it remains largely the same design outlined by Goertzel et al.
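As a rough illustration of the coupling point, here is a hedged sketch (all names are hypothetical; this is not OpenCog’s actual API) of an inference-engine interface with a tiny Pearl-style Bayesian network behind it, doing exact inference by enumeration. A rule-based backend like PLN would implement the same interface, which is what would keep the swap an implementation detail rather than an architectural change:

```python
# Hedged sketch of the "loosely coupled logic engine" point above.
# The InferenceEngine interface and the backend are hypothetical and
# do not correspond to actual OpenCog/PLN APIs.

from abc import ABC, abstractmethod
from itertools import product
from typing import Dict


class InferenceEngine(ABC):
    @abstractmethod
    def query(self, variable: str, evidence: Dict[str, bool]) -> float:
        """Return P(variable = True | evidence)."""


class BayesNetEngine(InferenceEngine):
    """Minimal two-node network, Rain -> WetGrass, with made-up CPTs."""

    def __init__(self) -> None:
        self.p_rain = 0.2
        # P(WetGrass = True | Rain)
        self.p_wet_given_rain = {True: 0.9, False: 0.1}

    def _joint(self, rain: bool, wet: bool) -> float:
        p_r = self.p_rain if rain else 1.0 - self.p_rain
        p_w = self.p_wet_given_rain[rain]
        return p_r * (p_w if wet else 1.0 - p_w)

    def query(self, variable: str, evidence: Dict[str, bool]) -> float:
        # Enumerate the full joint distribution and condition on evidence.
        num = den = 0.0
        for rain, wet in product([True, False], repeat=2):
            world = {"Rain": rain, "WetGrass": wet}
            if any(world[k] != v for k, v in evidence.items()):
                continue
            p = self._joint(rain, wet)
            den += p
            if world[variable]:
                num += p
        return num / den


if __name__ == "__main__":
    engine: InferenceEngine = BayesNetEngine()
    # With the CPTs above, P(Rain = True | WetGrass = True) is about 0.69.
    print(engine.query("Rain", {"WetGrass": True}))
```

The rest of the system only ever calls query(), so whether the answers come from a rule base or a graphical model is invisible to the planner and the other components.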
AI has been around for almost 60 years. However, AGI as a discipline was invented by Goertzel et al. only in the last 10 to 15 years or so. The story before that is honestly quite a bit more complex, with much of the first 50 years of AI being spent working on the sub-component projects of an integrative AGI. So without prototype solutions to the component problems, I don’t find it at all surprising that progress was not made on integrating the whole.
Any evidence for that particular belief?
What do you think is missing from the implementation strategy outlined in Goertzel’s Engineering General Intelligence?
Haven’t read it, but I’m guessing a prototype...?
If you had that, then you wouldn’t need a few years to implement it, now would you?