How do I tell whether a small group doing secret research will be better or worse at saving the world than the global science/military complex? Does anyone have strong arguments either way?
I haven’t heard of any justification for why it might only take “nine people and a brain in a box in a basement”. I think some people are too convinced of the AIXI approximation route and therefore believe that it is just a math problem that only takes some thinking and one or two deep insights.
Every success in AI so far has relied on a huge team, whether IBM Watson, Siri, BigDog, or the various self-driving cars. Two examples:
1) With Siri, Apple is using the results of over 40 years of research funded by DARPA via SRI International’s Artificial Intelligence Center, through the Personalized Assistant that Learns (PAL) program and the Cognitive Assistant that Learns and Organizes (CALO) project.
2) When a question is put to Watson, more than 100 algorithms analyze the question in different ways and find many different plausible answers, all at the same time. Yet another set of algorithms ranks the answers and gives them a score. For each possible answer, Watson finds evidence that may support or refute it; for each of hundreds of possible answers, it finds hundreds of pieces of evidence and then, with hundreds of algorithms, scores the degree to which the evidence supports the answer. The answer with the best evidence assessment earns the most confidence, and the highest-ranking answer becomes the answer. During a Jeopardy! game, however, if the top answer isn’t rated high enough to give Watson sufficient confidence, it decides not to buzz in and risk losing money on a wrong answer. Watson does all of this in about three seconds. (A toy sketch of this candidate-and-evidence pipeline follows below.)
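As an illustration only, here is a minimal sketch of the shape of such a pipeline, under assumptions of my own; the function names, the retrieval step, and the plain averaging of scores are hypothetical stand-ins, not IBM’s design:

```python
from typing import Callable, Optional

# A Watson-style answering loop in caricature: generate candidate answers,
# score each against retrieved evidence with many scoring algorithms,
# combine the scores into a confidence, and only "buzz in" above a threshold.

Scorer = Callable[[str, str], float]  # (candidate, evidence) -> support in [0, 1]

def answer_question(
    candidates: list[str],
    evidence_for: Callable[[str], list[str]],  # evidence retrieval, assumed given
    scorers: list[Scorer],                     # stand-ins for the "100+ algorithms"
    buzz_threshold: float = 0.5,
) -> Optional[str]:
    best_answer, best_confidence = None, 0.0
    for candidate in candidates:
        pieces = evidence_for(candidate)
        scores = [score(candidate, ev) for ev in pieces for score in scorers]
        # The real system learns how to weigh its scorers against each other;
        # a plain average is the simplest possible stand-in.
        confidence = sum(scores) / len(scores) if scores else 0.0
        if confidence > best_confidence:
            best_answer, best_confidence = candidate, confidence
    # Decline to answer (don't buzz in) when the top confidence is too low.
    return best_answer if best_confidence >= buzz_threshold else None
```

The hard part, of course, is everything this sketch assumes away: the candidate generation, the evidence retrieval, and the hundred-odd scorers.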
It takes a company like IBM to design such a narrow AI. More than 100 algorithms. Could it have been done without a lot of computational and intellectual resources?
The basement approach seems ridiculous given the above.
IBM Watson started with a rather small team (two or three people); IBM only started pouring resources into it once they saw serious potential.
I haven’t heard of any justification for why it might only take “nine people and a brain in a box in a basement”.

I didn’t mean to endorse that. What I was thinking when I wrote “hire the most promising AI researchers to do research in secret” was that if there are any extremely promising AI researchers who are convinced by the argument but don’t want to give up their life’s work, we could hire them to continue in secret, just to keep the results out of the public domain and to activate suitable contingency plans as needed.
My thoughts on what the main effort should be are still described in Some Thoughts on Singularity Strategies.
I think some people are too convinced of the AIXI approximation route and therefore believe that it is just a math problem that only takes some thinking and one or two deep insights.

Inductive inference is “just a math problem”. That’s the part that models the world, which is what our brain spends most of its time doing. However, it’s probably not “one or two deep insights”; inductive inference systems seem to be complex and challenging to build.
Everything is a math problem. But that doesn’t mean you can build a brain by sitting in your basement and simply thinking it up.
A well-specified math problem, then. By contrast with fusion or space travel.
How is intelligence well specified compared to space travel? For space travel we know the physics well enough, and we know we want to get from point A to point B. With intelligence, we don’t even quite know what exactly we want from it. We know of some ridiculously slow towers-of-exponents method, which means precisely nothing.
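The “ridiculously slow method” here is presumably AIXI. For reference, a sketch of Hutter’s standard formula (my addition, not something stated in this thread): at cycle k the agent picks the action that maximizes expected reward up to horizon m, under a mixture over all programs q consistent with the interaction history,

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left( r_k + \cdots + r_m \right)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

where U is a universal Turing machine and \ell(q) is the length of program q. The mixture is incomputable, and the computable variant AIXItl still needs on the order of t \cdot 2^l computation per cycle, which is presumably the kind of cost being dismissed above.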
The claim was: inductive inference is just a math problem. If we knew how to build a good-quality, general-purpose stream compressor, the problem would be solved.
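To make the compression-as-prediction link concrete, here is a toy sketch; zlib is of course only a crude stand-in for the hypothetical good general-purpose compressor, and the example is mine, not the commenter’s:

```python
import zlib

def marginal_cost(history: bytes, continuation: bytes) -> int:
    """Extra compressed bytes needed to encode `continuation` after `history`."""
    return len(zlib.compress(history + continuation, 9)) - len(zlib.compress(history, 9))

def most_plausible(history: bytes, candidates: list[bytes]) -> bytes:
    # Under the compression = prediction correspondence, the continuation that
    # compresses best given the history is the one the implicit model finds
    # most probable.
    return min(candidates, key=lambda c: marginal_cost(history, c))

history = b"abcabc" * 100  # a highly regular stream
print(most_plausible(history, [b"abcabc", b"cbacba", b"xqzwvu"]))  # -> b'abcabc'
```

The better the compressor’s implicit world-model, the better its predictions, which is exactly why “build a good general-purpose stream compressor” and “solve inductive inference” are the same problem.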