As my name has come up in this thread I thought I’d briefly chime in. I do believe it’s reasonably likely that a human-level AGI could be created in a period of, let’s say, 3-10 years, based on the OpenCogPrime design (see http://opencog.org/wiki/OpenCog_Prime). I don’t claim any kind of certitude about this, it’s just my best judgment at the moment.
So far as I can recall, all projections I have ever made about the potential of my own work to lead to human-level (or greater) AGI have been couched in terms of what could be achieved if an adequate level of funding were provided for the work. A prior project of mine, Webmind, was well-funded for a brief period, but my Novamente project (http://novamente.net) never has been, and neither has OpenCogPrime … yet.
Whether others involved in OpenCogPrime work agree closely with my predictive estimates is really beside the point to me: some agree more closely than others. We are involved in doing technical research and engineering work according to a well-defined plan (aimed explicitly at AGI at the human level and beyond), and the important thing is knowing what needs to be done, not knowing exactly how long it will take. (If I found out my time estimate were off by a factor of 5, I’d still consider the work roughly equally worthwhile. If I found out it were off by a factor of 10, that would give me pause, and I would seriously consider devoting my efforts to developing some sort of brain-scanning technology, or quantum computing hardware, or some totally different sort of AGI design.)
I do not have a mathematical proof that the OpenCogPrime design will work for human-level AGI at all, nor a rigorous calculation to support my time-estimate. I have discussed the relevant issues with many smart, knowledgeable people, but ultimately, as with any cutting-edge research project, there is a lot of uncertainty here.
I really do not think that my subjective estimate about the viability of the OpenCogPrime AGI design is based on any kind of simple cognitive error. It could be a mistake, but it’s not a naive or stupid mistake!
To verify or refute, with a reasonable level of certitude, my hypothesis that the OpenCogPrime design (or the Novamente Cognition Engine design: they’re similar but not identical) is adequate for human-level AGI, Manhattan Project-level funding would not be required. US $10M per year for a decade would be ample; and if things were done very carefully, without too much bad luck, we might be able to move the project full-speed-ahead on as little as US $1.5M per year, and achieve amazing results within as little as 3 years.
Hell, we might be able to get to the end goal without ANY funding, based on the volunteer efforts of open-source AI developers, though this seems a particularly difficult path, and I think the best course will be to complement these much-valued volunteer efforts with funded effort.
Anyway, a number of us are working actively on the OpenCogPrime project now (some funded by SIAI, some by Novamente LLC, some as volunteers) even without an overall “adequate” level of funding, and we’re making real progress, though not as much as we’d like.
Regarding my role with SIAI: as Eliezer stated in this thread, he and I have not been working closely together so far. I was invited into SIAI to, roughly speaking, develop a separate AGI research programme which complements Eliezer’s but is still copacetic with SIAI’s overall mission. So far the main thing I have done in this regard is to develop the open-source OpenCog (http://opencog.org) AGI software project, of which OpenCogPrime is a subset.