Vassar wrote:
I think it somewhat unlikely there are creationists at your level (Richard Smalley included) and would be astounded if there were any at mine. Well… I mean avowed and sincere biblical literalists; there might be all sorts of other doctrines that could be called creationist.
I have no clear idea what you mean by “level” in the above...
IQ?
Demonstrated scientific or mathematical accomplishments?
Degree of agreement with your belief system? ;-)
-- Ben G
I think there is a well-understood, rather common phrase for the approach of “thinking about AGI issues and trying to understand them, because you don’t feel you know enough to build an AGI yet.”
This is quite simply “theoretical AI research” and it occupies a nontrivial percentage of the academic AI research community today.
Your (Eliezer’s) motivations for pursuing theoretical rather than practical AGI research are a little different from the usual ones—but the basic idea of trying to understand the issues theoretically, mathematically, and conceptually before messing with code is not terribly odd...
Personally I think both theoretical and practical AGI research are valuable, and I’m glad both are being pursued.
I’m a bit of a skeptic that big AGI breakthroughs are going to occur via theory alone, but you never know … history shows it is very hard to predict where a big discovery will come from.
And, hypothetically, let’s suppose someone does come up with a big AGI breakthrough from a practical direction (like, say, oh, the OpenCogPrime team… ;-). Then it will be very good that there exist individuals (like yourself) who have thought very deeply about the theoretical aspects of AGI, FAI, and so forth … you and other such individuals will be extremely well positioned to help guide thinking on the practical steps that follow the breakthrough...
-- Ben G