I would only trust this strategy with hyper-neuromorphic artificial intelligence, and that’s unlikely to FOOM uncontrollably anyway. In general, the applicability of such a strategy depends on the structure of the AI, but the region in which it might be applicable is a tiny hyperbubble in mind space centered on humans. Anything more alien than that, and it’s a profoundly naive idea.
Yes. That’s pretty much my point.
Thanks for sharing your personal feeling on this matter. However, I’d be more interested if you had some sort of rational argument in favor of your position!
The key issue is the tininess of the hyperbubble you describe, right? Do you have an argument supporting some specific estimate of the measure of this hyperbubble? (And do you have some specific measure on mind space in mind?)
To put it differently: what properties does a mind need to have in order for the “raise a nice baby AGI” approach to have a reasonable chance of being effective? Which properties of the human mind do you think are necessary for this to be the case?
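To make concrete what I mean by “a specific measure”: even a toy model shows how much the answer depends on the modelling choices. Here is a minimal sketch, assuming (purely for illustration, not as a claim about real minds) that mind space is the unit cube [0, 1]^d under the uniform measure and that the hyperbubble is a small ball around a single human reference point; the dimension, the radius and the measure are all invented.

```python
import numpy as np

# Toy illustration only, not a claim about real minds.  Model "mind space"
# as the unit cube [0, 1]^d with the uniform measure, put "human minds" at
# the centre, and call the hyperbubble a Euclidean ball of radius r around
# that point.  d, r and the uniform measure are all made-up assumptions;
# the point is only how sensitive the answer is to them.

rng = np.random.default_rng(0)

def bubble_measure(dim, radius=0.2, n_samples=200_000):
    """Monte Carlo estimate of the measure of a ball of `radius` around the centre."""
    human_point = np.full(dim, 0.5)           # stand-in for "human-like minds"
    samples = rng.random((n_samples, dim))    # uniform draws from the toy mind space
    distances = np.linalg.norm(samples - human_point, axis=1)
    return np.mean(distances < radius)

for d in (2, 5, 10, 20):
    print(f"d={d:2d}: estimated measure of the hyperbubble ~ {bubble_measure(d):.2e}")
```

Under those made-up assumptions the estimate collapses as the dimension grows (around d = 10 it already falls below what this many samples can detect and prints zero), which is the “tiny hyperbubble” intuition in its crudest form. But pick a measure concentrated near human-like or neuromorphic designs and the picture changes completely, which is exactly why I’m asking what measure you have in mind.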
Well, consider this: it takes only a very small functional change to the human brain to make ‘raising it as a human child’ a questionable strategy at best. Crippling a few features of the brain produces sociopaths who, notably, cannot be reliably inculcated with our values, despite sharing 99.99…% of our neurological architecture.
Mind space is a tricky thing to pin down in a useful way, so let’s just say the bubble is really tiny. If the changes you’re making are larger than the changes between a sociopath and a neurotypical human, then you shouldn’t employ this strategy. Trying to use it on any kind of de novo AI without anything analogous to our neurons is foolhardy beyond belief. So much of our behavior is predicated on things that aren’t and can’t be learned, and trying to program all of those qualities and intuitions by hand, so that the AI can be properly taught our value scheme, looks broadly isomorphic to the FAI problem.
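Spelled out as a decision rule, the heuristic I’m gesturing at looks something like the sketch below. The huge caveat: nobody actually has an embedding of minds into feature vectors, so the embedding, the distance metric and every number here are invented purely for illustration.

```python
import numpy as np

# Minimal sketch of the go/no-go heuristic above.  It assumes, hypothetically,
# that minds could be embedded as feature vectors and that Euclidean distance
# in that embedding tracks "functional change".  The embedding, the metric and
# every number below are invented for illustration only.

def child_rearing_plausible(candidate, human_baseline, sociopath, neurotypical):
    """Reject 'raise it like a child' if the candidate mind deviates from the
    human baseline by more than a sociopath deviates from a neurotypical."""
    tolerance = np.linalg.norm(np.asarray(sociopath) - np.asarray(neurotypical))
    deviation = np.linalg.norm(np.asarray(candidate) - np.asarray(human_baseline))
    return deviation <= tolerance

# Hypothetical four-dimensional "mind embeddings":
neurotypical = [1.0, 1.0, 1.0, 1.0]
sociopath    = [1.0, 1.0, 0.2, 0.9]   # a few social/empathy features broken
neuromorphic = [0.9, 1.1, 0.8, 1.0]   # engineered to sit near the human cluster
de_novo_ai   = [3.0, -2.0, 0.0, 7.5]  # nothing like a human mind

print(child_rearing_plausible(neuromorphic, neurotypical, sociopath, neurotypical))  # True
print(child_rearing_plausible(de_novo_ai,   neurotypical, sociopath, neurotypical))  # False
```

The specific numbers don’t matter; the point is that the sociopath/neurotypical gap is the largest deviation for which we have any evidence the strategy tolerates, and a de novo design exceeds it by construction.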
Human children respond to normal child-rearing practices the way they do because of specific functional adaptations of the human mind. This general principle applies to everything from language acquisition to parent-child bonding to acculturation. Expose a monkey, dog, fish or alien to the same environment, and you’ll get a different outcome.
Unfortunately, while the cog sci community has produced reams of evidence on this point, they’ve also discovered that said adaptations are very complex, and mapping out in detail what they all are and how they work is turning out to be a long research project. Partial results exist for a lot of intriguing examples, along with data on what goes wrong when different pieces are broken, but it’s going to be a while before we have a complete picture.
An AI researcher who claims his program will respond like a human child is implicitly claiming either that this whole body of research is wrong (in which case I want to see evidence), or that he’s somehow implemented all the necessary adaptations in code despite the fact that no one knows how they all work (yeah, right). Either way, this isn’t especially credible.
I think some cross-cultural human studies might be a way of starting to answer this question. Looking at autists, or other non-neurotypical minds, would also be helpful. Studying sociopaths or psychopaths would also be important (they pass our society’s behaviour filters, and yet misbehave). The errors of early AGIs (as long as they’re left unpatched!!!) will also be very revealing, and let us try to trace the contours of non-human minds, and get insights into human minds as well. Formal philosophical measures (what kinds of consistent long-term behaviours can exist in theory?) may also help.
More ideas will no doubt spring to mind—if you want, we can design a research program!