[LINK] Engineering General Intelligence (the OpenCog/CogPrime book)
Ben Goertzel has made available a pre-print copy of his book Engineering General Intelligence (Vol. 1, Vol. 2). The first volume is essentially the OpenCog organization's roadmap to AGI, and the second volume is a 700-page overview of the design.
From the book:
The OpenCogPrime roadmap from the opencog wiki:
Can you give me, or direct me to, more CliffsNotes-style summaries of AGI research? I'd love to contribute to OpenCog as a (non-computer) scientist, and I wonder if there's anything I could help with. Am I right in guessing the code is about stuff like this?
No. I just excerpted this part because a) I thought it summarized the key phases well and b) I'm interested in this kind of approach (I see lots of parallels between machine learning meta-strategies and human learning and education).
Any opinions on where Goertzel’s stuff stands in relation to whatever there is that passes for state of the art in AGI research?
And is it even worth trying to have this conversation on LW? We don't seem to see much of anything here about AI that actually does stuff and is being worked on right now (Google cars, IBM Watson, DeepMind, etc.) beyond what you can read in a press release. Is all of the interesting stuff proprietary, so we don't get bored grad students coming here chatting about it, or is there an understanding among the people involved in actual AI research that LW and MIRI are not worth bothering with?
Depends on how you dereference "AGI research". The term was invented by Goertzel et al. to describe what OpenCog is, so at least from that standpoint it is very relevant. Stepping back, among people who actually bother to make the AI/AGI distinction, OpenCog is definitely one of the most influential projects in this relatively small field. It's not a monoculture, though, and there are other influential AGI projects with very different designs. But OpenCog is certainly a heavyweight contender.
Of course, there are also the groups which don't make the AI/AGI distinction, such as most of the machine learning and perception crowds, and Kurzweil et al. These people think they can achieve general intelligence by layering narrow AI techniques or through direct emulation, and probably think very little of the integrative methods pursued by Goertzel.
Can you elaborate? I’m not sure I understand the question. Why wouldn’t this be a great place to discuss AGI?
Because LW has been around for five or so years, and I remember seeing very little nuts-and-bolts AI discussion here at the level of, say, Starglider's AI Mini-FAQ, and very few discussions about the deep technical details of something like IBM's recent AI work, whatever goes on at DeepMind, and things like that. Of course there are going to be trade secrets involved, but beyond pretty much just AIXI, I don't even see much ambient awareness of whatever publicly known technical methods the companies are probably basing their work on. It's as if the industry were busy fielding automobiles, biplanes, and tanks while the majority at LW still had trouble figuring out the basic concepts of steam power.
LW can discuss the philosophy part, but I don't see much capability around here that could actually look through Goertzel's design and say, for instance, "this looks like a non-starter because of recognized technical problem X", "this resembles successful design Y, so it's probably worth studying more closely", or "this has a really novel and interesting attack on known technical problem Z; even if the rest is junk, that part definitely needs close study". And I don't think the philosophy is going to stay afloat for very long if its practitioners aren't able to follow the technical details of what people are actually doing in the domain they'd like to philosophize about.
I was going to respond with a biting “well then what the heck is the point of LW?” post, but I think you got the point:
Frankly, without a willingness to educate oneself about implementation details, the philosophizing is pointless. Maybe this is a wake-up call for me to go find a better community :\
EDIT: Who created the Starglider AI Mini-FAQ? Do we know their real-world identity?
I was hoping more for people to study technical AI details and post about them here, but whatever works. If you do find a better community, post a note here somewhere.
Michael Wilson, looks like.
My goal is to enact a positive singularity. To that end I’m not convinced of the instrumentality of educating people on the interwebs, given other things I could be doing.
I had thought that a community with a tight focus on 'friendly AGI' would be interested in learning about, and discussing, how such an AGI might actually be constructed, or otherwise getting involved in some way. If not, I don't think it's worth my time to correct this mistake.
Oh really? :-D
As far as DeepMind goes, Jaan Tallinn was involved in it and is one of the biggest donors to MIRI.
If you look at the participant lists of MIRI workshops, there was always a person from Google in attendance.