Or perhaps it could be that Ben is too busy actually developing and researching AI to spend time discussing it ad nauseam? I stopped following many mailing lists and communities like this because I don’t actually have time to argue in circles with people.
(But I make an exception when people start making up untruths about OpenCog.)
Even if they don’t want to discuss their insights “ad nauseam”, I need some indication that they have new insights. Otherwise they won’t be able to build AI. “Busy developing and researching” doesn’t look very promising from the outside, considering how many other groups present themselves the same way.
Ben’s publishing several books (well, he’s already published several, but the already-written “Building Better Minds” is due early 2012, with a pop-science version shortly thereafter; both are more current regarding OpenCog). I’ll be writing a “practical” guide to OpenCog once we reach our 1.0 release at the end of 2012.
Ben actually does quite a lot of writing, theorizing, and conference speaking, whereas I and a number of others are more concerned with the software development side of things.
We also have a wiki: http://wiki.opencog.org
What new insights are there?
Well, “new” is relative… so without knowing how familiar you are with OpenCog, I can’t comment.
New insights relative to the current state of academia. Many of us here are up to date in the relevant areas (or trying to be). I’m not sure what my knowledge of OpenCog has to do with anything, as I was asking for the benefit of all onlookers too.
Even if they don’t want to discuss their insights “ad nauseam”, I need some indication that they have new insights. Otherwise they won’t be able to build AI.

Evolution managed to do that without any capacity for having insights. It’s not out of the question that enough hard work without much understanding would suffice, particularly if you use the tools of mainstream AI (machine learning).
Also, just “success” is not something one would wish to support (success at exterminating humanity, say, is distinct from success in exterminating polio), so the query about which institution is more likely to succeed is seriously incomplete.
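To make the point about insight-free optimization concrete for onlookers, here is a minimal sketch (my own toy example, assuming nothing about OpenCog’s code or anyone’s actual methods): blind mutate-and-select reaches a target bitstring without any model of the problem, which is roughly all the “understanding” evolution ever had.

    import random

    # Hypothetical target for illustration only.
    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

    def fitness(genome):
        # Number of positions matching the target; the search never "knows" why.
        return sum(g == t for g, t in zip(genome, TARGET))

    genome = [random.randint(0, 1) for _ in TARGET]
    while fitness(genome) < len(TARGET):
        # Flip one random bit; keep the mutant only if it scores at least as well.
        candidate = genome[:]
        i = random.randrange(len(candidate))
        candidate[i] ^= 1
        if fitness(candidate) >= fitness(genome):
            genome = candidate

    print(genome)  # equals TARGET, found by blind trial and error

The loop never represents why any bit is right or wrong; it just keeps whatever scores better, and that alone is enough to hit the target.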