Too bad we can’t judge Friendly AI charity effectiveness as “easily” as we can judge the effectiveness of some other charities, like those which distribute malaria nets and vaccines.
If one assumes that giving toward solving the Friendly AI problem offers the highest marginal return on investment, which project do you give to? Yudkowsky / SIAI? OpenCog / Goertzel? Gert-Jan Lokhorst? Stan Franklin / Wendell Wallach / Colin Allen?
My money is on SIAI, but I can’t justify that with anything quick and easy.
As I see it, OpenCog is making practical progress towards an architecture for AGI, whereas SIAI is focused on the theory of Friendly AI.
I specifically added “consultation with SIAI” in the latter part of OpenCog’s roadmap to try to ensure the highest odds of OpenCog remaining friendly under self-improvement.
As far as I’m aware there is no software development going on at SIAI; it’s all theoretical and philosophical work on decision theory, etc. (This might have changed, but I haven’t heard anything about them launching an engineering or experimental effort.)
Indeed, that is another reason for me to conclude that the SIAI should seek cooperation with projects that follow an experimental approach.
At the moment, they seem more interested in lobbing stink bombs in the general direction of the rest of the AI community—perhaps in the hope that it will drive some of the people there in its direction. Claiming that your opponents’ products may destroy the world is surely a classic piece of FUD marketing.
The “friendlier-than-thou” marketing battle seems to be starting out with some mud-slinging.
I don’t know much about AI specifically, but I do know something about software in general. And I’d say that even if someone had a correct general idea how to build an AGI (an assumption that by itself beggars belief given the current state of the relevant science), developing an actual working implementation with today’s software tools and methodologies would be sort of like trying to build a working airplane with Neolithic tools. The way software is currently done is simply too brittle and unscalable to allow for a project of such size and complexity, and nobody really knows when and how (if at all) this state of affairs will be improved.
With this in mind, I simply can’t take seriously people who propose a roadmap for building an AGI within a few years.
This is the sort of sentiment that leads people to predict that AGI will be built in 300 years, because “300 years” is how difficult the problem feels. There is a lot of uncertainty about what it takes to build an AGI, and it would be wrong to be confident one way or the other about just how difficult that’s going to be, or what tools are necessary.
We understand both airplanes and Neolithic tools, but we don’t understand AGI design. Difficulty in basic understanding doesn’t straightforwardly translate into the difficulty of solution.
That is true, but a project like OpenCog can succeed only if: (1) there exists an AGI program simple enough (in terms of both size and messiness) to be doable with today’s software technology, and (2) people running the project have the right idea how to build it. I find both these assumptions improbable, especially the latter, and their conjunction vanishingly unlikely.
Perhaps a better analogy would be if someone embarked on a project to find an elementary proof of P != NP or some such problem. We don’t know for sure that it’s impossible, but given both the apparent difficulty of the problem and the history of the attempts to solve it, such an announcement would be rightfully met with skepticism.
You appealed to the inadequacy of “today’s software tools and methodologies”. Now you make a different argument. I didn’t say it’s probable that a solution will be found (given the various difficulties); I said that you can’t be sure that it’s Neolithic tools in particular that are inadequate.
It’s hard to find a perfect analogy here, but both analogies I mentioned lend support to my original claim in a similar way.
It may be that with the present state of math, one could cite a few established results and use them to construct a simple proof of P != NP, only nobody’s figured it out yet. Analogously, it may be that there is a feasible way to take present-day software tools and use them to implement a working AGI. In both cases, we lack the understanding that would be necessary either to achieve the goal or to prove it impossible. However, what insight and practical experience we have strongly suggests that neither thing is doable, leading to the conclusion that present-day software tools are likely inadequate.
In addition to this argument, we can also observe that even if such a solution exists, finding it would be a task of enormous difficulty, possibly beyond anyone’s practical abilities.
This reasoning doesn’t lead to the same certainty that we have in problems involving well-understood physics, such as building airplanes, but I do think it’s sufficient (when spelled out in full detail) to establish a very high level of certainty nevertheless.
Well, if you bothered to look at our/OpenCog’s roadmap, you’d see it doesn’t expect AGI in a “few years”.
What magical software engineering tools are you after that can’t be built with the current tools we have?
If nobody attempts to build these then nothing will ever improve—people will just go “oh, that can’t be done right now, let’s just wait a while until the tools appear that make AGI like snapping lego together”. Which is fine if you want to leave the R&D to other people… like us.
The roadmap on opencog.org has among its milestones: “2019-2021: Full-On Human Level AGI.”
Well, if I knew, I’d be cashing in on the idea, not discussing it here. In any case, surely you must agree that claiming the ability to develop an AGI within a decade is a very extraordinary claim.
As in “extraordinary claims demand extraordinary evidence”.
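The slogan has a precise Bayesian reading: a claim with very low prior odds needs evidence with a very large likelihood ratio before the posterior becomes appreciable. A minimal sketch of the arithmetic (the numbers here are purely illustrative, not anyone’s actual estimates):

```python
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Hypothetical numbers: prior odds of 1:99 on "human-level AGI within
# a decade", and evidence that is ten times likelier if the claim is
# true than if it is false.
posterior = posterior_odds(1 / 99, 10.0)
probability = posterior / (1 + posterior)  # convert odds to probability
```

Even fairly strong evidence (a likelihood ratio of 10) moves a 1% prior only to roughly 9%, which is why a claim assigned a low prior demands unusually strong evidence to become credible.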
A summary of the evidence can be found on Ben’s blog.
Adding some more info…
Basically the evidence can be divided into two parts. 1) Evidence that the OpenCog design (or something reasonably similar) would be a successful AGI system when fully implemented and tested. 2) Evidence that the OpenCog design can be implemented and tested within a decade.
1) The OpenCog design has been described in considerable detail in various publications (formal or otherwise); see http://opencog.org/research/ for an incomplete list. A lot of other information is available in other papers co-authored by Ben Goertzel, talks/papers from the AGI Conferences (http://agi-conf.org/), and the AGI Summer School (http://agi-school.org/) amongst other places.
These resources also include explanations of why various parts of the design would work, using a mix of different types of argument (e.g. intuitive arguments, math, empirical results). This doesn’t constitute a formal proof that the design will work, but it is good evidence.
2) The OpenCog design is realistically achievable with current software/hardware and doesn’t require any major new conceptual breakthroughs. Obviously it may take years longer than intended (or even years less); it depends on funding, project efficiency, how well other people solve parts of the problem, and various other things. It’s not realistic to estimate the exact number of years at this point, but it seems unlikely to take more than, say, 20 years, given adequate funding.
By the way, the two-year project mentioned in that blog post is the OpenCog Hong Kong project, which is where ferrouswheel (Joel Pitt) and I are currently working. We have several other people here as well, and various other people working right now (including Nil Geisweiller, who posted earlier as nilg).
Not particularly; people have been claiming that human-level intelligence is a decade away since the dawn of the AI field, so why should now be any different? ;p
And usually people would consider a decade being more than a “few years”—which was sort of my point.
Eyeballing my own graph, I give it about a 12% chance of being true. Ambitious, but not that extraordinary.
People are usually overoptimistic about the timescales of their own projects. It is typically an attempt to signal optimism and confidence.
We agree on that point, I just didn’t have the balls to say it. :)
The OpenCog Roadmap does say that they will collaborate with SIAI at some point: