I sincerely believe I have a recipe for creating a human-level thinking machine! In an ethical way, and with computing resources currently at our disposal.
I’m a little bit suspicious, but in an ethical way? Reminds me of an argument by Greg Egan:
What I regret most is my uncritical treatment of the idea of allowing intelligent life to evolve in the Autoverse. Sure, this is a common science-fictional idea, but when I thought about it properly (some years after the book was published), I realised that anyone who actually did this would have to be utterly morally bankrupt. To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way. Yes, this happened to our own ancestors, but that doesn’t give us the right to inflict the same kind of suffering on anyone else.
This is potentially an important issue in the real world. It might not be long before people are seriously trying to “evolve” artificial intelligence in their computers. Now, it’s one thing to use genetic algorithms to come up with various specialised programs that perform simple tasks, but to “breed”, assess, and kill millions of sentient programs would be an abomination. If the first AI was created that way, it would have every right to despise its creators. [The Dust Theory: FAQ]
I want to highlight the difficulties involved in some other problem besides AGI, namely P vs. NP:
P vs. NP is an absolutely enormous problem, and one way of seeing that is that there are already vastly, vastly easier questions that would be implied by P not equal to NP but that we already don’t know how to answer. So basically, if someone is claiming to prove P not equal to NP, then they’re sort of jumping 20 or 30 nontrivial steps beyond what we know today. (...) We have very strong reasons to believe that these problems cannot be solved without major — enormous — advances in human knowledge. (...) So in order to prove such a thing, a prerequisite to it is to understand the space of all possible efficient algorithms. That is an unbelievably tall order. So the expectation is that on the way to proving such a thing, we’re going to learn an enormous amount about efficient algorithms, beyond what we already know, and very, very likely discover new algorithms that will likely have applications that we can’t even foresee right now. [3 questions: P vs. NP.]
So is AGI that much easier to solve than a computational problem like P vs. NP that I should believe Ben Goertzel here? Besides, take for example a constrained, well-understood domain like Go. AI still performs awfully at Go. So far I believed that it would take at least one paradigm-shattering conceptual revolution before someone comes up with AGI. Sure, you don’t have to solve any particular problem but that of AGI itself. I’m completely unable to judge any of the claims here, but my gut feeling is that this is questionable.
Also, if this is true, then what about Friendly AI? Is he claiming to have solved the problem of TDT as well?
I believe I once read that Marvin Minsky basically claims the same: with enough money he could build an AGI.
This gives me a new approach to the self-enhancement problem: use OpenCog to tackle P vs. NP. That is, use OpenCog tools to develop a system capable of representing and “thinking about” the problems of computational complexity theory. The models of self-enhancement that we have now, like Schmidhuber’s Gödel machine, are like AIXI: brute-force starting points that might take forever to pick up speed. But if we design a system specifically to tackle the advanced problems of theoretical computer science, it will start out with concepts and heuristics likely to assist efficient self-enhancement, rather than having to discover all of them by itself.
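To make the contrast with “brute-force starting points” concrete, here is a toy sketch (my own illustration, not OpenCog or Gödel-machine code) of naive program enumeration in the spirit of Levin-style universal search. The miniature “interpreter” and the example task are made up for the illustration; the only point is that unguided enumeration pays an exponential price before it finds even trivial improvements:

    # Toy illustration: brute-force program search.
    # Enumerate every bit string up to a length bound and check whether,
    # read as a "program", it solves a trivial target task. The search
    # space doubles with every extra bit, which is why a system that
    # starts from scratch takes forever to pick up speed.

    from itertools import product

    def run_program(bits, x):
        """Hypothetical interpreter: a 'program' of length n maps
        input x to the bit at position x mod n."""
        return bits[x % len(bits)]

    def solves_task(bits, examples):
        """Check the candidate program against all input/output examples."""
        return all(run_program(bits, x) == y for x, y in examples)

    def brute_force_search(examples, max_len=16):
        """Enumerate programs in order of length (shortest first)."""
        tried = 0
        for length in range(1, max_len + 1):
            for bits in product((0, 1), repeat=length):
                tried += 1
                if solves_task(bits, examples):
                    return bits, tried
        return None, tried

    # A trivial task: output 1 exactly when the input is even.
    examples = [(0, 1), (1, 0), (2, 1), (3, 0)]
    program, tried = brute_force_search(examples)
    print(f"found {program} after {tried} candidates")

A system seeded with domain heuristics would prune this space rather than walking it blindly, which is roughly the advantage being claimed for a purpose-built system over a generic self-improver.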
Before the test, an ensemble of copies of the AGI would be created, with identical knowledge state. Each copy would interact with a different human teacher, who would demonstrate to it a certain behavior. ... The multiple copies may, depending on the AGI system design, then be able to be reintegrated,
Re: Greg Egan
From: http://wiki.opencog.org/wikihome/images/3/39/Preschool.pdf
So let’s make multiple divergent copies per day, and maybe “re-integrate” them if we decide to design for that.
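For what it’s worth, here is a toy sketch of what “fork, teach separately, reintegrate” could look like at the most naive level. Everything in it (the dict-of-confidences “knowledge state”, the averaging merge) is my own stand-in and has nothing to do with how OpenCog actually represents or merges knowledge:

    # Toy sketch of fork-and-reintegrate, purely illustrative.
    # An "agent" is just a dict mapping concept -> confidence. We fork
    # identical copies, let each copy learn from a different teacher,
    # then reintegrate by merging the copies' knowledge back into one state.

    import copy

    def fork(agent, n):
        """Create n copies with identical knowledge state."""
        return [copy.deepcopy(agent) for _ in range(n)]

    def teach(agent, lessons):
        """A teacher demonstrates behaviors; the copy updates its knowledge."""
        for concept, confidence in lessons.items():
            agent[concept] = max(agent.get(concept, 0.0), confidence)
        return agent

    def reintegrate(copies):
        """Merge divergent copies: average the confidence for each concept."""
        merged = {}
        for agent in copies:
            for concept, confidence in agent.items():
                merged.setdefault(concept, []).append(confidence)
        return {concept: sum(vals) / len(vals) for concept, vals in merged.items()}

    base_agent = {"stack blocks": 0.2}
    copies = fork(base_agent, 2)
    teach(copies[0], {"stack blocks": 0.9})   # teacher A demonstrates stacking
    teach(copies[1], {"name colors": 0.8})    # teacher B demonstrates naming colors
    print(reintegrate(copies))
    # {'stack blocks': 0.55, 'name colors': 0.8}

Whether the divergent copies count as separate minds, and what it means ethically to discard or average them away, is exactly the Egan-style worry raised above.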