Thanks, and yeah, that MIGHT work… But so far everything Eliezer has said indicates the opposite, and argues it very well.
I’m mostly hoping for some VERY far removed sub-sub-problem, maybe.
Hmm. I’ve read most of the sequences and some of his monograph on FAI, but I don’t recall him explicitly arguing against dividing up the work into sub-problems. Intuitively, it seems that if you trust person X to do FAI cautiously, then you should also trust them to pick out sub-problems and to determine when those sub-problems have been solved satisfactorily.
Could you point me to the relevant links?
Also, I might be terribly mistaken here, but it seems like not every component of the AGI puzzle need be tied directly to FAI, at least in the development phase. Each one must eventually be fully understood and integrated into an FAI, but I don’t see why I need to be as careful when, say, designing a conceptual model of an ontology engine that can combine and generalize concepts, at least until I want to integrate it into the FAI. At that point the FAI people could review the best attempts at the various components of AGI, figure out whether any of them are acceptable, and then work out how to integrate them into an FAI framework.
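To make the kind of component I have in mind a bit more concrete, here is a minimal toy sketch (purely my own illustration, not anything from Eliezer or SIAI, and nothing like a real ontology engine): treat a concept as a named set of features, with “generalizing” as intersecting feature sets and “combining” as taking their union.

```python
# Toy sketch only: concepts as feature sets. All names here are made up
# for illustration; no real AGI component works this simply.
from dataclasses import dataclass


@dataclass(frozen=True)
class Concept:
    name: str
    features: frozenset  # e.g. frozenset({"has_wings", "lays_eggs"})


def generalize(a: Concept, b: Concept) -> Concept:
    """Generalize two concepts by keeping only the features they share."""
    return Concept(f"({a.name}|{b.name})", a.features & b.features)


def combine(a: Concept, b: Concept) -> Concept:
    """Combine two concepts into one that has the features of both."""
    return Concept(f"({a.name}&{b.name})", a.features | b.features)


if __name__ == "__main__":
    sparrow = Concept("sparrow", frozenset({"has_wings", "lays_eggs", "small"}))
    eagle = Concept("eagle", frozenset({"has_wings", "lays_eggs", "large"}))
    print(generalize(sparrow, eagle).features)  # shared features only
    print(combine(sparrow, eagle).features)     # all features of both
```

The point is just that a piece like this can be prototyped and tested on its own terms; the FAI-level caution comes in when deciding whether and how to wire it into the larger system.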
I guess so, but I distinctly remember some writing Eliezer did that gave the strong impression that if your IQ was below 9000 you shouldn’t try to do anything but give the SIAI money. I don’t remember from where, though, and it certainly sounds weird, so maybe my memory just messed up.
Going solely on what Eliezer has said about ‘exceeding your role models’, I would take that with a grain of salt. I’ve never met Eliezer, and although he comes off as extremely intelligent, judging by his writings and level of achievement (which are impressive), he still does not come off to me as, say, Von Neumann intelligent.
Eliezer’s writings have clarified my thoughts a great deal and given me a stronger sense of purpose. He is a very intelligent researcher and a gifted explainer and evangelist, but I don’t take his word as Gospel; I take it as generally very good advice.
Wouldn’t dispute that.
I seem to recall you saying as much: at one time as not quite like a thousand-year-old vampire, and at another as not ‘glittery’. It only occurs to me now that that combination makes Jaynes a thousand-year-old Twilight vampire. Somehow that takes some of the impressiveness out of the metaphor, Luminosity revamp (cough) or not!
I imagine you are probably thinking of something like this.
Yes! In fact, I’m pretty sure it’s not just something like that, but that exact specific page.
I suspect but am not certain that you’re thinking of “So You Want To Be A Seed AI Programmer”. I also suspect that the document is at least partially out of date, however.
Yup, correct!
Sounds like he might have been talking about this.
Or possibly about this.
Umm, the 9000 thing was my own interpretation. I don’t think the article I’m talking about even explicitly mentioned IQ.