ferrouswheel:
Well, if you bothered looking at our/OpenCog’s roadmap, you’ll see it doesn’t expect AGI in a “few years”.
What magical software engineering tools are you after that can’t be built with the current tools we have?
If nobody attempts to build these then nothing will ever improve—people will just go “oh, that can’t be done right now, let’s just wait a while until the tools appear that make AGI like snapping lego together”. Which is fine if you want to leave the R&D to other people… like us.
Well, if you bothered looking at our/OpenCog’s roadmap you’ll see it doesn’t expect AGI in a “few years”.
The roadmap on opencog.org has among its milestones: “2019-2021: Full-On Human Level AGI.”
What magical software engineering tools are you after that can’t be built with the current tools we have?
Well, if I knew, I’d be cashing in on the idea, not discussing it here. In any case, surely you must agree that claiming the ability to develop an AGI within a decade is a very extraordinary claim.
As in “extraordinary claims demand extraordinary evidence”.
A summary of the evidence can be found on Ben’s blog.
Adding some more info…
Basically the evidence can be divided into two parts:
1) Evidence that the OpenCog design (or something reasonably similar) would be a successful AGI system when fully implemented and tested.
2) Evidence that the OpenCog design can be implemented and tested within a decade.
1) The OpenCog design has been described in considerable detail in various publications (formal or otherwise); see http://opencog.org/research/ for an incomplete list. A lot of other information is available in other papers co-authored by Ben Goertzel, talks/papers from the AGI Conferences (http://agi-conf.org/), and the AGI Summer School (http://agi-school.org/) amongst other places.
These resources also include explanations for why various parts of the design would work. They use a mix of different types of argument (e.g. intuitive arguments, math, empirical results). This doesn’t constitute a formal proof that the design will work, but it is good evidence.
2) The OpenCog design is realistic to achieve with current software/hardware and doesn’t require any major new conceptual breakthroughs. Obviously it may take years longer than intended (or even years less); it depends on funding, project efficiency, how well other people solve parts of the problem, and various other things. It’s not realistic to estimate the exact number of years at this point, but it seems unlikely that it would need to take more than, say, 20 years, given adequate funding.
By the way, the two year project mentioned in that blog post is the OpenCog Hong Kong project, which is where ferrouswheel (Joel Pitt) and I are currently working. We have several other people here as well, and various other people working right now (including Nil Geisweiller who posted before as nilg).
Not particularly. People have been claiming human-level intelligence is a decade away since the dawn of the AI field; why should now be any different? ;p
And usually people would consider a decade to be more than a “few years”, which was sort of my point.
Eyeballing my own graph, I give it about a 12% chance of being true. Ambitious, but not that extraordinary.
People are usually overoptimistic about the timescales of their own projects; such estimates are typically an attempt to signal optimism and confidence.