I meant Artificial General Intelligence as the term was first coined and used in the AI community: the ability to adapt to any new environment or task.
Google’s machine learning algorithms don’t just correctly classify videos of cats; they can discover the concept of a cat on their own, given a library of images extracted from video content and no prior knowledge or supervisory feedback.
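As a toy illustration of that kind of unsupervised concept discovery, here is a minimal sketch, with random vectors standing in for real image features; the dimensions, cluster count, and clustering method are arbitrary assumptions, not a description of Google’s actual system:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for feature vectors extracted from video frames.
# Two latent "concepts" (say, cat frames vs. everything else); the
# algorithm is never told which vector belongs to which.
rng = np.random.default_rng(0)
cat_like = rng.normal(loc=1.0, scale=0.3, size=(200, 64))
everything_else = rng.normal(loc=-1.0, scale=0.3, size=(200, 64))
frames = np.vstack([cat_like, everything_else])

# Unsupervised grouping: the two concepts emerge purely from the
# structure of the data, with no labels and no supervisory feedback.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(frames)
print(np.bincount(model.labels_[:200]))   # one cluster claims the cat half
print(np.bincount(model.labels_[200:]))   # the other claims the rest
```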
A Roomba interacts with its environment to build a virtual model of my apartment, and uses that acquired knowledge to vacuum my floors efficiently while improvising around unexpected obstacles like an 8-month-old baby or my cat.
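And a similarly minimal sketch of the map-and-cover behavior; the grid, the sweep pattern, and the obstacle handling are illustrative assumptions, not how a real Roomba is programmed:

```python
# Toy occupancy-grid coverage: sweep a gridded floor plan row by row,
# alternating direction, and route around cells found to be blocked.
FREE, BLOCKED, CLEANED = 0, 1, 2

def sweep(grid):
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        for c in cols:
            if row[c] == BLOCKED:      # unexpected obstacle: skip it, keep going
                continue
            row[c] = CLEANED

apartment = [[FREE, FREE, FREE],
             [FREE, BLOCKED, FREE],    # e.g. a baby or a cat in the way
             [FREE, FREE, FREE]]
sweep(apartment)
print(apartment)   # every reachable cell is CLEANED; the obstacle is left alone
```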
The cat classifier and the Roomba are both prime examples of applied AI in the marketplace today. But ask Google’s neural net to vacuum my floor, or a Roomba to point out videos of cats on the internet, and… well, the hypothetical doesn’t even make sense. There is an inferential gap here that can’t be crossed, because the software is incapable of adapting itself.
A software program that can make changes to its own source code, whether by introspection or by random mutation, can eventually adapt to whatever new environment or goal is presented to it (so long as the search process doesn’t get stuck on local maxima, but that’s a software engineering problem). Such software is an Artificial General Intelligence, an AGI.
OpenCog right now has a rather advanced evolutionary search over program space (MOSES) at its core. On YouTube you can find some cool videos of OpenCog agents learning and accomplishing arbitrary goals in unstructured virtual environments. Because of the unconstrained evolutionary search over program space, this is technically an AGI: you could put it in any environment, with any effectors and any goal, and eventually it would figure out both how that goal maps to the environment and how to accomplish it. CogPrime, the theoretical architecture OpenCog is moving towards, is “merely” the addition of many, many other special-purpose memory and heuristic components, which both speed the process along and make the agent’s thinking more human-like.
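To make “evolutionary search over program space” concrete, here is a deliberately tiny sketch: candidate programs are expression trees over a toy instruction set, scored against an arbitrary goal, with a restart-heavy search as the crude guard against local maxima. The representation and search loop are my own illustrative assumptions; MOSES itself is far more sophisticated:

```python
import random
import operator

OPS = [operator.add, operator.sub, operator.mul]

def random_program(depth=3):
    """Grow a random expression tree over {x, small constants, +, -, *}."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 0, 1, 2, 3])
    return (random.choice(OPS), random_program(depth - 1), random_program(depth - 1))

def run(prog, x):
    """Interpret a program tree on input x."""
    if prog == 'x':
        return x
    if isinstance(prog, int):
        return prog
    op, left, right = prog
    return op(run(left, x), run(right, x))

def score(prog):
    # The goal is arbitrary: here, behave like f(x) = x*x + 1.
    # Zero means the goal is met exactly; more negative is worse.
    return -sum((run(prog, x) - (x * x + 1)) ** 2 for x in range(-5, 6))

best = random_program()
best_score = score(best)
for _ in range(20000):
    challenger = random_program()   # crude "mutation": propose a fresh program
    s = score(challenger)
    if s >= best_score:
        best, best_score = challenger, s
    if best_score == 0:
        break

print(best, best_score)
```

Swap in a different `score` function and the same loop adapts toward a different goal; that interchangeability of goals is the point being made here.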
Notice there is nothing in here about the Turing test, nor should there be. Nor is there any requirement that the intelligence be human-level in any way, only that it could become so, given enough processing power and time. Such intelligences already exist.
“Pass the Turing Test” is a goal, and is therefore a subset of GI. The Wikipedia article says “Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can.”
Your claim that OpenCog can “eventually” accomplish any task is unsupported, is not something that has been “implemented”, and is not what AGI is generally understood to mean.
That quote describes what a general intelligence can do, not what it is. And you can’t extract the Turing test from it: a general intelligence might perform tasks better, but in a different way that distinguishes it from a human.
I explained quite well how OpenCog’s use of MOSES—already implemented—to search program space achieves universality. It is your claim that OpenCog can’t accomplish (certain?) tasks that is unsupported. Care to explain?
Don’t argue about it, put OpenCog up for a TT.
That wouldn’t prove anything, because the Turing test doesn’t prove anything… A general intelligence might perform tasks better, but in a different way that distinguishes it from a human, which makes the Turing test not a useful test of general intelligence.
You’re assuming chatting is not a task.
Natural language is also a prerequisite for a wide range of other tasks: an entity that lacks it will not be able to write books or tell jokes.
It seems as though you have trivialised the “general” into “able to do whatever it can do, but not able to do anything else”.
Eh, “chatting in such a way as to successfully masquerade as a human before a panel of trained judges” is a very, very difficult task. Likely more difficult than “develop molecular nanotechnology” or other tasks that might be given to a seed-stage or oracle AGI. So while a general intelligence should be able to pass the Turing test (eventually!), I would be very suspicious if that happened before the other milestones that are what we actually want an AGI for.
Chatting may be difficult, but it is needed to fulfill the official definition of an AGI.
Your comments amount to having a different definition of AGI.