This might come down to eugenics. Imagine that in 15 years, with the help of genetic engineering, lots of extremely high-IQ people are born, and their superior intelligence means that in another 15 or so years (absent a singularity) they will totally dominate AGI software development. The faster the economic growth rate, the more likely it is that AGI will be developed before these super-geniuses come of age.
Are these high-IQ folk selectively working on FAI rather than AGI to a sufficient degree to make up for UFAI’s inherently greater parallelizability?
EDIT: Actually, smarter researchers probably confer a greater relative advantage on FAI than on UFAI, to a greater extent than even differences in serial depth of cognition, so it’s hard to see how this could realistically be bad. Reversal test: dumber researchers everywhere would not help FAI over UFAI.
I’m not sure; this would depend on their personalities. But you might learn a lot about their personalities while they were still too young to be effective programmers. In one future Earth you might trust them and hope for enough time for them to come of age, whereas in another you might be desperately trying to create a foom before they overtake you.
Hopefully, much of the variance in human intelligence comes down to genetic load; a low genetic load often makes for an all-around great and extremely smart person, someone like William Marshal; and we will soon create babies with extremely low genetic loads. If this is to be our future, we should probably hope for slow economic growth.
Extremely high IQ arising from engineering… is that not AI?
This is not a joke. UFAI is essentially the fear that “we” will be replaced by another form of intelligence, outcompeted for resources by what amounts to another life form.
But how do “we” not face the same threat from an engineered lifeform just because some of the ingredients are us? If such a new engineered lifeform replaces natural humanity, is that not a UFAI? If we can build some curator instinct or Three Laws or whatever into this engineered superhuman, is that not FAI?
The interesting thing to me here is what we mean by “we.” I think a LessWrong poster is more likely to identify, as “we,” with an engineered superhuman in a meat substrate than with an engineered non-human intelligence in a non-meat substrate.
Considering this, maybe an FAI is just an AI that learns enough about what we think of as human that it can hack it. It could construct itself so that it felt to us like our descendant, our child. Then “we” do not resent the AI for taking all “our” resources, because the AI has successfully led us to be happy to see our child succeed beyond what we managed.
Perhaps one might say that of course this would be on our list of things we would define as unfriendly. But then we build AIs that “curate” humans as we are now, and we are precluded from enhancing ourselves or evolving past some limit we have preprogrammed into our FAI?
http://lesswrong.com/lw/erj/parenting_and_happiness/94th?context=3