I’m curious: who are the AI and AGI designers, seeing as one hasn’t been publicly built yet? Or is this other researchers in the AGI field? If you are looking for feedback from someone technical though not academic, I’d be very interested in assisting.
There are a half-dozen AGI projects with working implementations. There are multiple annual conferences where people working on AGI share their results. There’s literature on the subject going back decades, really to the birth of AI in the ’50s and ’60s. The term AGI itself was coined by people working in this field to describe what they are building. Maybe you mean something different than AGI when you say “one hasn’t been publicly built yet”?
There seems to be some serious miscommunication going on here. By “AGI”, do you mean a being capable of a wide variety of cognitive tasks, including passing the Turing Test? By “AGI project”, do you mean an actual AGI, and not just a project with AGI as its goal? By “working implementation”, do you mean actually achieving AGI, or just achieving some milestone on the way?
I meant Artificial General Intelligence as that term was first coined and used in the AI community: the ability to adapt to any new environment or task.
Google’s machine learning algorithms can not just correctly classify videos of cats; given a library of images extracted from video content, they can discover the concept of a cat on their own, with no prior knowledge or supervisory feedback.
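To make the unsupervised-learning idea concrete, here is a toy sketch in Python. The actual Google result used a large deep autoencoder trained on YouTube frames; this is only a k-means illustration on made-up two-dimensional “feature vectors”, so the data, the cluster count, and everything else here are assumptions for illustration, not their method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are feature vectors extracted from video frames: one cloud of
# "cat-like" frames and one of everything else. No labels are ever used.
cats = rng.normal(loc=1.0, scale=0.3, size=(100, 2))
other = rng.normal(loc=-1.0, scale=0.3, size=(100, 2))
frames = np.vstack([cats, other])

def kmeans(points, k=2, iters=20):
    """Plain k-means: alternate nearest-center assignment and center update."""
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every point to every center, shape (n_points, k).
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for i in range(k):
            members = points[labels == i]
            if len(members):              # keep the old center if a cluster empties
                centers[i] = members.mean(axis=0)
    return centers, labels

centers, labels = kmeans(frames)
print("cluster centers discovered without supervision:\n", centers)
```

The point is only that structure (“there are two kinds of frame”) falls out of the data with no teacher; the deep-learning version discovers far richer concepts the same general way.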
A Roomba interacts with its environment to build a virtual model of my apartment, and uses that acquired knowledge to efficiently vacuum my floors while improvising in the face of unexpected obstacles like an 8-month-old baby or my cat.
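iRobot doesn’t publish the Roomba’s internals, so the following is only a toy occupancy-grid sketch of the mapping idea, with an invented grid size and invented obstacle positions: a simulated robot wanders, records bumps as obstacles in its internal map, and keeps marking the open floor it has covered.

```python
import random
random.seed(0)

FREE, UNKNOWN, OBSTACLE = ".", "?", "X"
N = 8
grid = [[UNKNOWN] * N for _ in range(N)]       # the robot's internal map
world_obstacles = {(2, 3), (5, 5), (4, 1)}     # e.g. a baby or a cat

pos = (0, 0)
grid[0][0] = FREE
for _ in range(500):                           # wander and map
    dr, dc = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    r, c = pos[0] + dr, pos[1] + dc
    if not (0 <= r < N and 0 <= c < N):
        continue                               # hit a wall: stay put, try again
    if (r, c) in world_obstacles:
        grid[r][c] = OBSTACLE                  # bump sensor fired: record it
        continue                               # improvise: pick another direction
    grid[r][c] = FREE                          # open floor: move and mark cleaned
    pos = (r, c)

print("\n".join(" ".join(row) for row in grid))
```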
These are both prime examples of applied AI in the marketplace today. But ask Google’s neural net to vacuum my floor, or a Roomba to point out videos of cats on the internet, and… well, the hypothetical doesn’t even make sense: there is an inferential gap here that can’t be crossed, as the software is incapable of adapting itself.
A software program which can make changes to its own source code—either by introspection or random mutation—can eventually adapt to whatever new environment or goal is presented to it (so long as the search process doesn’t get stuck on local maxima, but that’s a software engineering problem). Such software is Artificial General Intelligence, AGI.
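Here is a minimal sketch of that kind of search, assuming nothing about any particular system: a toy genetic program that mutates expression trees until one fits an arbitrary goal (here, matching f(x) = x² + x over a range of inputs). This is not OpenCog’s MOSES, just the bare evolutionary-search-over-program-space idea:

```python
import random
random.seed(1)

OPS = ("+", "-", "*")

def random_expr(depth=3):
    """Grow a random expression tree over {x, small constants, + - *}."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(-2, 2)
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    a, b = evaluate(a, x), evaluate(b, x)
    return a + b if op == "+" else a - b if op == "-" else a * b

def fitness(expr):
    """Lower is better: squared error against the goal task f(x) = x*x + x."""
    return sum((evaluate(expr, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(expr):
    """Random mutation: replace a random subtree with a freshly grown one."""
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr(2)
    op, a, b = expr
    return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

population = [random_expr() for _ in range(200)]
for generation in range(100):
    population.sort(key=fitness)
    if fitness(population[0]) == 0:
        break                                   # goal reached: exact program found
    survivors = population[:100]                # selection: keep the better half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(100)]

print("best program:", population[0], "error:", fitness(population[0]))
```

Swap in a different fitness function and the same loop chases a different goal; that interchangeability is what the “general” in AGI is pointing at (modulo the local-maxima caveat above).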
OpenCog right now has a rather advanced evolutionary search over program space at its core. On YouTube you can find some cool videos of OpenCog agents learning and accomplishing arbitrary goals in unstructured virtual environments. Because of the unconstrained evolutionary search over program space, this is technically an AGI: you could put it in any environment with any effectors and any goal, and eventually it would figure out both how that goal maps to the environment and how to accomplish it. CogPrime, the theoretical architecture OpenCog is moving towards, is “merely” the addition of many, many other special-purpose memory and heuristic components, which both speed the process along and make the agent’s thinking process more human-like.
Notice there is nothing in here about the Turing test, nor should there be. Nor is there any requirement that the intelligence be human-level in any way, just that it could be, given enough processing power and time. Such intelligences already exist.
“Pass the Turing Test” is a goal, and is therefore subsumed under general intelligence. The Wikipedia article says “Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can.”
Your claim that OpenCog can “eventually” accomplish any task is unsupported, is not something that has been “implemented”, and is not what AGI is generally understood to refer to.
That quote describes what a general intelligence can do, not what it is. And you can’t extract the Turing test from it. A general intelligence might perform tasks better, but in a different way that distinguishes it from a human.
I explained quite well how OpenCog’s use of MOSES—already implemented—to search program space achieves universality. It is your claim that OpenCog can’t accomplish (certain?) tasks that is unsupported. Care to explain?
Don’t argue about it; put OpenCog up for a Turing Test.
That wouldn’t prove anything, because the Turing test doesn’t prove anything… A general intelligence might perform tasks better, but in a different way that distinguishes it from a human, thereby making the Turing test not a useful test of general intelligence.
You’re assuming chatting is not a task.
NL (natural language) is also a prerequisite for a wide range of other tasks: an entity that lacks it will not be able to write books or tell jokes.
It seems as though you have trivialised the “general” into “able to do whatever it can do, but not able to do anything else”.
Eh, “chatting in such a way as to successfully masquerade as a human against a panel of trained judges” is a very, very difficult task. Likely more difficult than “develop molecular nanotechnology” or other tasks that might be given to a seed-stage or oracle AGI. So while a general intelligence should be able to pass the Turing test (eventually!), I would be very suspicious if it came before the other milestones that are really what we are seeking an AGI for.
Chatting may be difficult, but it is needed to fulfill the official definition of an AGI.
Your comments amount to having a different definition of AGI.
Can you list the six working AGI projects? I’d be interested, but I suspect we are talking about different things.
OpenCog, NARS, LIDA, Soar, ACT-R, MicroPsi. More:
http://wiki.opencog.org/w/AGI_Projects
http://bicasociety.org/cogarch/architectures.htm
Not sure yet; taking advice. The AI people are narrow-AI developers, and the AGI people are those who are actually planning to build an AGI (e.g. Ben Goertzel).
For a perspective very different from both narrow AI and, to a lesser extent, Goertzel*, you might want to contact Pat Langley. He is taking a Good Old-Fashioned approach to Artificial General Intelligence:
http://www.isle.org/~langley/
His competing AGI conference series:
http://www.cogsys.org/
* Goertzel probably approves of all the work Langley does; certainly the reasoning engine of OpenCog is similarly structured. But unlike Langley, the OpenCog team thinks there isn’t one true path to human-level intelligence, GOFAI or otherwise.
EDIT: Not that I think you shouldn’t be talking to Goertzel! In fact I think his CogPrime architecture is the only fully fleshed-out AGI design that, as specified, could reach and surpass human intelligence, and the GOLUM meta-AGI architecture is the only FAI design I know of. My only critique is that certain aspects of it cut corners, e.g. the rule-based PLN probabilistic reasoning engine vs. an actual Bayes-net updating engine à la Pearl et al.
Thanks!