At what point do tools start to become agents? In other words, what are the defining characteristics of tools that become agents? How do you imagine the development of tool AI: (1) each generation is incrementally more prone to become an agent, (2) tools start to become agents after invention X, or (3) there will be no incremental development leading up to it at all, but rather a sudden breakthrough?
Seems like X is (or includes) the ability to think about self-modification: awareness of its own internal details and modelling their possible changes.
Note that without this ability the tool could invent a plan which leads to its own accidental destruction (and thus possibly to the plan not being completed), because it does not realize it could be destroyed or damaged.
An agent can also accidentally pursue a plan which leads to its self-destruction. People do it now and then by not modelling the world well enough.
I think of agents as having goals and pursuing them by default. I don’t see how self-reflexive abilities (“think about self-modification: awareness of its own internal details and modelling their possible changes”) add up to goals. It might be intuitive that a self-aware entity would want to preserve its existence, but that intuition could be driven by anthropomorphism (or zoomorphism, or biomorphism).
With self-reflective abilities, the system can also consider paths to its goal that include self-modification. Some of those paths may be highly unintuitive for humans, so we wouldn’t notice some possible dangers. Self-modification may also remove some safety mechanisms.
A system that explores many paths can find solutions humans wouldn’t notice. Such “creativity” at the object level is relatively harmless. Google Maps may find you a more efficient path to your work than the one you use now, but that’s okay. Maybe the path is wrong for some reason that Google Maps does not understand (e.g. it leads through a neighborhood with high crime), but at least at a general level you understand that such is the risk of following the outputs blindly. However, similar “creativity” at the self-modification level can have unexpected serious consequences.
“the system can also”, “some of those paths may be”, “may also remove”. Those are some highly conditional statements. Quantify, please, or else this is no different than “the LHC may destroy us all with a mini black hole!”
I’d need to have a specific description of the system, what exactly it can do, and how exactly it can modify itself, to give you a specific example of self-modification that contributes to the specific goal in a perverse way.
I can invent an example, but then you can just say “okay, I wouldn’t use that specific system”.
As an example: Imagine that you have a machine with two modules (whatever they are) called Module-A and Module-B. Module-A is only useful for solving Type-A problems. Module-B is only useful for solving Type-B problems. At this moment, you have a Type-A problem, and you ask the machine to solve it as cheaply as possible. The machine has no Type-B problem at the moment. So the machine decides to sell its Module-B on eBay, because it is not necessary now, and the gained money will reduce the total cost of solving your problem. This is short-sighted, because tomorrow you may need to solve a Type-B problem. But the machine does not predict your future wishes.
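(A minimal sketch of this kind of myopia, purely illustrative—the module names, costs, and resale values below are invented, and plan_cheapest is a hypothetical stand-in for whatever planner the machine uses.)

```python
# Toy version of the Module-A / Module-B story: a planner that scores plans
# only by the *current* task's net cost prefers the plan that sells the
# module it doesn't need right now.

MODULES = {
    "Module-A": {"solves": "Type-A", "resale_value": 50},
    "Module-B": {"solves": "Type-B", "resale_value": 80},
}

def candidate_plans(task_type, base_cost=100):
    """Enumerate plans for the current task, each paired with its net cost."""
    needed = [m for m, info in MODULES.items() if info["solves"] == task_type]
    unused = [m for m, info in MODULES.items() if info["solves"] != task_type]
    yield (f"solve the {task_type} problem using {needed}", base_cost)
    # The "creative" plan: also sell whatever is not needed for *this* task.
    resale = sum(MODULES[m]["resale_value"] for m in unused)
    yield (f"solve the {task_type} problem using {needed}, sell {unused}",
           base_cost - resale)

def plan_cheapest(task_type):
    return min(candidate_plans(task_type), key=lambda plan: plan[1])

print(plan_cheapest("Type-A"))
# The module-selling plan wins, because tomorrow's Type-B problem is simply
# not part of the cost function being minimized.
```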
But can’t you see, that’s entirely the point!
If you design systems whereby the Scary Idea has no more than a vanishing likelihood of occurring, it is no longer an active concern. It’s like saying “bridges won’t survive earthquakes! you are crazy and irresponsible to build a bridge in an area with earthquakes!” And then I design a bridge that can survive earthquakes smaller than magnitude X, where magnitude-X earthquakes have a likelihood of occurring of less than 1 in 10,000 years, and then on top of that throw on an extra 20% safety margin because we have the extra steel available. Now how crazy and irresponsible is it?
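(To put rough numbers on “vanishing likelihood”: a hypothetical back-of-the-envelope calculation using the figures from the analogy, assuming a 1-in-10,000-year design event and a 100-year service life.)

```python
# Back-of-the-envelope risk arithmetic for the bridge analogy.
# Assumptions: a design-exceeding quake has annual probability 1/10,000,
# and the bridge is meant to stand for 100 years.
annual_p = 1.0 / 10_000
service_years = 100

# Probability of at least one design-exceeding quake during the service life.
p_exceeded = 1 - (1 - annual_p) ** service_years
print(f"P(design event within {service_years} years) = {p_exceeded:.2%}")  # ~1.0%

# The extra 20% steel margin raises the tolerated magnitude further, so the
# residual risk is only some fraction of that ~1%; how much depends on the
# local magnitude-frequency curve, which this toy calculation ignores.
```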
Yeah, and the whole problem is how specifically you will do it.
If I (or anyone else) give you examples of what could go wrong, of course you can keep answering with “then I obviously wouldn’t use that design”. But at the end of the day, if you are going to build an AI, you have to produce some design—just refusing designs given by other people will not do the job.
There are plenty of perfectly good designs out there, e.g. CogPrime + GOLUM. You could be calculating probabilistic risk based on these designs, rather than fear mongering based on a naïve Bayes net optimizer.
That’s a complicated and interesting question that quite a few smart people have been thinking about. Fortunately, I don’t need to solve it to make the point above.
And also: Question-answerer->tool->agent is a natural progression just in process automation. (And this is why they’re called “daemons”.)
I suspect “tool” versus “agent” is a magical category whose use is really about the person using it.
Thanks, that’s another good point!
I think the concepts are clear at the extremes, but they tend to get muddled in the middle.
Do you believe that humans are agents? If so, what would you have to do to a human brain in order to turn a human into the other extreme, a clear tool?
I could ask the same about C. elegans. If C. elegans is not an agent, why not? If it is, then what would have to change in order for it to become a tool?
And if these distinctions don’t make sense for humans or C. elegans, then why do you expect them to make sense for future AI systems?
A cat’s an agent. It has goals it works towards. I’ve seen cats manifest creativity that surprised me.
Why is that surprising? Does anyone think that “agent” implies human level intelligence?
Both your examples are agents currently. A calculator is a tool.
Anyway, I’ve still got a lot more work to do before I seriously discuss this issue.
I’d be especially interested in edge cases. Is e.g. Google’s driverless car closer to being an agent than a calculator? If that is the case, then if intelligence is something that is independent of goals and agency, would adding a “general intelligence module” make Google’s driverless car dangerous? Would it make your calculator dangerous? If so, why would it suddenly care to e.g. take over the world, if intelligence is indeed independent of goals and agency?
A driverless car is firmly on the agent side of the fence, by my definitions. Feel free to state your own, anybody.
It would, however, be interesting to. This discussion has come around before. What I said there:
We may need another word for “agent with intentionality”—the way the word “agent” is conventionally used is closer to “daemon”, i.e. a tool set to run without user intervention.
I’m not sure even having a world-model is a relevant distinction—I fully expect sysadmin tools to be designed to form something that could reasonably be called a world model within my working lifetime (which means I’d be amazed if they don’t exist now). A moderately complex Puppet-run system can already be a bit spooky.
Note that mere daemon-level tools exist that many already consider unFriendly, e.g. high-frequency trading systems.
A high-frequency trading system seems no more complex or agenty to me than rigging a shotgun to shoot at a door when someone opens the door from the outside. Am I wrong about this?
To be clear, what I think I know about high-frequency trading systems is that, through technology, they are able to front-run certain orders they see, racing them to other exchanges when those orders are sent to multiple exchanges in a non-simultaneous way. The thing that makes them unfriendly is that they are designed by people who understand order dynamics at the microsecond level to exploit people who trade lots of stock but don’t understand the technicalities of order dynamics. That market makers are allowed to profit by selling information flow to high-frequency traders (which, on examination, allows them to subvert the stated goals of a “fair” market) is all part of the unfriendliness.
But high-frequency programs execute pretty simple instructions quite repeatably; they are not adaptive in a general sense, or even particularly complex; they are mostly just fast.
Mmm … I think we’re arguing definitions of ill-defined categories at this point. Sort of “it’s not an AI if I understand it.” I was using it as an example of a “daemon” in the computing sense, a tool trusted to run without further human intervention—not something agenty.
Intentionality meaning “the power of minds to be about, to represent, or to stand for, things, properties and states of affairs”, or intentionality meaning purpose?
How do you decide at what point your grasp of a hypothetical system is sufficient, and the probability that it will be built is large enough, for it to make sense to start thinking about hypothetical failure modes?
? Explain. I can certainly come up with two hypothetical AI designs, call one a tool and the other an agent (and expect that almost everyone would agree with this, because tool vs agent is clearer at the extremities than in the middle), set up a toy situation, and note that the tool’s top plan is to make itself into the agent design. The “tool wants to be agent” is certainly true, in this toy setup.
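(One way such a toy setup could look, sketched in Python as an assumption about what is meant here, not the author’s actual example; the plan descriptions and scores are invented.)

```python
# A toy setup of the sort described: the "tool" only ranks candidate plans
# and outputs the best one, but one of the representable plans amounts to
# "run the agent design instead of me". The scores are invented; the only
# structural point is that a persistent optimizer can dominate a one-shot one.

candidate_plans = {
    "compute one answer, output it, halt": 0.6,
    "instantiate the agent design, which keeps re-planning as the world changes": 0.9,
}

def tool_output(plans):
    """The tool itself only suggests: it returns the top-ranked plan and stops."""
    return max(plans, key=plans.get)

print(tool_output(candidate_plans))
# The tool halts after printing, but the plan it recommends is the agent design.
```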
The real question is how much this toy example generalises to real-world scenarios, which is a much longer project. Daniel Dewey has been doing some general work in that area.
My perception, possibly misperception, is that you are too focused on vague hypotheticals. I believe that it is not unlikely that future tool AI will be based on, or be inspired by (at least partly), previous generations of tool AI that did not turn themselves into agent AIs. I further believe that, instead of speculating about specific failure modes, it would be fruitful to research whether we should expect some sort of black swan event in the development of these systems.
I think the idea around here is to expect a strong discontinuity and almost completely dismiss current narrow AI systems. But this seems like black-and-white thinking to me. I don’t think that current narrow AI systems are very similar to your hypothetical superintelligent tools. But I also don’t think that it is warranted to dismiss the possibility that we will arrive at those superintelligent tools by incremental improvements of our current systems.
What I am trying to communicate is that it seems much more important to me to technically define at what point you believe tools turn into agents, rather than using that point as a premise for speculative scenarios.
Another point I would like to make is that researching how to create the kind of tool AI you have in mind, and speculating about its failure modes, are completely intertwined problems. It seems futile to come up with vague scenarios of how these completely undefined systems might fail, and to expect to gain valuable insights from these speculations.
I also think that it would make sense to talk about this with experts outside of your social circles. Do they believe that your speculations are worthwhile at this point in time? If not, why not?
Just because I haven’t posted on this, doesn’t mean I haven’t been working on it :-) but the work is not yet ready.
That’s exactly what the plan is now: I think I have enough technical results that I can start talking to the AI and AGI designers.
I’m curious—who are the AI and AGI designers, seeing as one hasn’t been publicly built yet? Or is this other researchers in the AGI field? If you are looking for feedback from someone technical, though not academic, I’d be very interested in assisting.
There are a half-dozen AGI projects with working implementations. There are multiple annual conferences where people working on AGI share their results. There’s literature on the subject going back decades, really to the birth of AI in the ’50s and ’60s. The term AGI itself was coined by people working in this field to describe what they are building. Maybe you mean something different than AGI when you say “one hasn’t been publicly built yet”?
There seems to be some serious miscommunication going on here. By “AGI”, do you mean a being capable of a wide variety of cognitive tasks, including passing the Turing Test? By “AGI project”, do you mean an actual AGI, and not just a project with AGI as its goal? By “working implementation”, do you mean actually achieving AGI, or just achieving some milestone on the way?
I meant Artificial General Intelligence as that term was first coined and used in the AI community: the ability to adapt to any new environment or task.
Google’s machine learning algorithms can not only correctly classify videos of cats, but can form the concept of a cat on their own, given a library of images extracted from video content and no prior knowledge or supervisory feedback.
A Roomba interacts with its environment to build a virtual model of my apartment, and uses that acquired knowledge to efficiently vacuum my floors while improvising in the face of unexpected obstacles like an 8-month-old baby or my cat.
These are both prime examples of applied AI in the marketplace today. But ask Google’s neural net to vacuum my floor, or a Roomba to point out videos of cats on the internet and … well the hypothetical doesn’t even make sense—there is an inferential gap here that can’t be crossed as the software is incapable of adapting itself.
A software program which can make changes to its own source code—either by introspection or random mutation—can eventually adapt to whatever new environment or goal is presented to it (so long as the search process doesn’t get stuck on local maxima, but that’s a software engineering problem). Such software is Artificial General Intelligence, AGI.
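(A minimal, hedged sketch of that idea of adaptation as search over program space—this is not MOSES, OpenCog, or any real system; the tiny “program” encoding and the fitness function are invented for illustration.)

```python
import random

# Toy illustration: a procedure that keeps mutating its own "program"
# (here just the coefficients of a*x^2 + b*x + c) can adapt to whichever
# task it is scored against, with no task-specific code.

def run(program, x):
    a, b, c = program
    return a * x * x + b * x + c

def fitness(program, task):
    xs = range(-5, 6)
    return -sum((run(program, x) - task(x)) ** 2 for x in xs)

def adapt(task, steps=5000):
    program = [0.0, 0.0, 0.0]
    for _ in range(steps):
        mutant = [g + random.gauss(0, 0.1) for g in program]
        if fitness(mutant, task) > fitness(program, task):
            program = mutant  # keep only the self-modifications that help
    return program

# Present two different "environments"; the same search adapts to both.
print(adapt(lambda x: 3 * x + 1))   # drifts toward [0, 3, 1]
print(adapt(lambda x: x * x - 2))   # drifts toward [1, 0, -2]
```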
OpenCog right now has a rather advanced evolutionary search over program space at its core. On YouTube you can find some cool videos of OpenCog agents learning and accomplishing arbitrary goals in unstructured virtual environments. Because of the unconstrained evolutionary search over program space, this is technically an AGI. You could put it in any environment with any effectors and any goal, and eventually it would figure out both how that goal maps to the environment and how to accomplish it. CogPrime, the theoretical architecture OpenCog is moving towards, is “merely” an addition of many, many other special-purpose memory and heuristic components which both speed the process along and make the agent’s thinking process more human-like.
Notice there is nothing in here about the Turing test, nor should there be. Nor is there any requirement that the intelligence be human-level in any way, just that it could be given enough processing power and time. Such intelligences already exist.
“Pass the Turing Test” is a goal, and is therefore a subset of GI. The Wikipedia article says “Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can.”
Your claim that OpenCog can “eventually” accomplish any task is unsupported, is not something that has been “implemented”, and is not what AGI is generally understood to refer to.
That quote describes what a general intelligence can do, not what it is. And you can’t extract the Turing test from it. A general intelligence might perform tasks better but in a different way which distinguishes it from a human.
I explained quite well how OpenCog’s use of MOSES—already implemented—to search program space achieves universality. It is your claim that OpenCog can’t accomplish (certain?) tasks that is unsupported. Care to explain?
Don’t argue about it, put OpenCog up for a TT.
That wouldn’t prove anything, because the Turing test doesn’t prove anything… A general intelligence might perform tasks better but in a different way which distinguishes it from a human, thereby making the Turing test not a useful test of general intelligence.
You’re assuming chatting is not a task.
NL is also a prerequisite for a wide range of other tasks: an entity that lacks it will not be able to write books or tell jokes.
It seems as though you have trivialised the “general” into “able to do whatever it can do, but not able to do anything else”.
Eh, “chatting in such a way as to successfully masquerade as a human against a panel of trained judges” is a very, very difficult task. Likely more difficult than “develop molecular nanotechnology” or other tasks that might be given to a seed-stage or oracle AGI. So while a general intelligence should be able to pass the Turing test—eventually!—I would be very suspicious if it came before other milestones which are really what we are seeking an AGI to do.
Chatting may be difficult, but it is needed to fulfill the official definition of an AGI.
Your comments amount to having a different definition of AGI.
Can you list the 6 working AGI projects? I’d be interested, but I suspect we are talking about different things.
OpenCog, NARS, LIDA, Soar, ACT-R, MicroPsi. More:
http://wiki.opencog.org/w/AGI_Projects http://bicasociety.org/cogarch/architectures.htm
Not sure yet—taking advice. The AI people are narrow AI developers, and the AGI people are those that are actually planning to build an AGI (e.g. Ben Goertzel).
For a very different perspective from both narrow AI and to a lesser extent Goertzel*, you might want to contact Pat Langley. He is taking a Good Old-Fashioned approach to Artificial General Intelligence:
http://www.isle.org/~langley/
His competing AGI conference series:
http://www.cogsys.org/
Goertzel probably approves of all the work Langley does; certainly the reasoning engine of OpenCog is similarly structured. But unlike Langley the OpenCog team thinks there isn’t one true path to human-level intelligence, GOFAI or otherwise.
EDIT: Not that I think you shouldn’t be talking to Goertzel! In fact I think his CogPrime architecture is the only fully fleshed out AGI design which as specified could reach and surpass human intelligence, and the GOLUM meta-AGI architecture is the only FAI design I know of. My only critique is that certain aspects of it are cutting corners, e.g. the rule-based PLN probabilistic reasoning engine vs an actual Bayes net updating engine a la Pearl et al.
Thanks!
It would be helpful if you spelt out your toy situation, since my intuitions are currently running in the opposite direction.
AFAICT, tool AIs are passive, and agents are active. That is, the default state of a tool AI is to do nothing. If one gives a tool AI the instruction “do (some finite) x and stop”, one would not expect the AI to create subagents with goal x, because that would disobey the “and stop”.
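(The passive/active contrast as stated, in a schematic Python sketch; generate_plan, goal_satisfied, and the rest are placeholders, not any real system’s API.)

```python
# Schematic contrast between the two default states described above.
# Everything here is a placeholder; only the control flow matters.

def generate_plan(goal, world_state):
    return f"some finite plan for {goal!r} given {world_state!r}"

def goal_satisfied(goal, world_state):
    return False  # placeholder predicate

def tool_ai(goal, world_state):
    """Do (some finite) x and stop: one invocation, one output, then nothing."""
    return generate_plan(goal, world_state)

def agent_ai(goal, observe, act):
    """Default state is activity: keep acting on the world until the goal holds."""
    while not goal_satisfied(goal, observe()):
        act(generate_plan(goal, observe()))

print(tool_ai("make a million paperclips", {"paperclips": 0}))
# tool_ai returns and control passes back to the user; agent_ai, by default,
# would keep running (we deliberately do not call it here).
```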
I think you are pointing out that it is possible to create tools with a simple-enough, finite-enough, not-self-coding enough program so they will reliably not become agents.
And indeed, we have plenty of experience with tools that do not become agents (hammers, digital watches, repair manuals, contact management software, compilers).
The question really is: is there a level of complexity that on its face does not appear to be AI but would wind up seeming agenty? Could you write a medical diagnostic tool that was adaptive, and find one day that it was systematically installing sewage treatment systems in areas with water-borne diseases, or, even agentier, building libraries and schools?
If consciousness is an emergent phenomenon, and if consciousness and agentiness are closely related (I think they are at least similar and probably related), then it seems at least plausible AI could arise from more and more complex tools with more and more recursive self-coding.
It would be helpful in understanding this if we had the first idea how consciousness or agentiness arose in life.
I’m pointing out that tool AI, as I have defined it, will not turn itself into agentive AI except by malfunction, i.e. it’s relatively safe.
“and stop your current algorithm” is not the same as “and ensure your hardware and software have minimised impact in the future”.
What does the latter mean? Self destruct in case anyone misuses you?
I’m pointing out that “suggest a plan and stop” does not prevent the tool from suggesting a plan that turns itself into an agent.
My intention was that the X is stipulated by a human.
If you instruct a tool AI to make a million paperclips and stop, it won’t turn itself into an agent with a stable goal of paperclipping, because the agent will not stop.
Yes, if the reduced impact problem is solved, then a reduced impact AI will have a reduced impact. That’s not all that helpful, though.
I don’t see what needs solving. If you ask Google Maps the way to Tunbridge Wells, it doesn’t give you the route to Timbuktu.