If it took 300k years to develop human software, and 4-13M years to develop human hardware (starting from our common ancestor with chimpanzees), that seems consistent with Eliezer’s claim that developing the software shouldn’t take all that long _compared with the hardware_. (Eliezer says “hard-software” rather than “hardware”, but unless I misunderstand he’s talking about something fairly close to “software that implements what human brain hardware does”.)
[EDITED to add:] On the other hand, you might expect software to evolve faster than hardware, at any given level of underlying complexity/difficulty/depth, because the relevant timescales for selection of memes are shorter than those for genes. So actually I’m not sure how best to translate timelines of human development into predictions for AI development. There’s no very compelling reason to assume that “faster for evolution” and “faster for human R&D” are close to being the same thing, anyway.
I think you’re responding to this as though it were just a metaphor and not noticing the extent to which it might just be meant literally. If we exit the part of human coordination space where we have a civilization, it could easily take another 300,000 years to get it back. That’s not a generalized claim about software vs hardware development times. It’s a specific claim that the specific “shallow soft-software” Eliezer is referring to might take hundreds of thousands of years to redevelop, regardless of what you might otherwise think about AI software development timelines.
I’m like 96% sure it was intended to apply to the question of how much of the work in making an AGI is about “cultural general-intelligence software”. But yeah, I agree that if we destroy our civilization it could take a long time to get it back. Not just because building a civilization takes a long time; also because there are various resources we’ve probably consumed most of the most accessible bits of, and not having such easy access to coal and oil and minerals could make building a new civilization much harder. But I’m not sure what hangs on that (as opposed to the related but separate question of whether we would rebuild civilization if we lost it) -- the destruction of human civilization would be a calamity, but I’m not sure it would be a much worse calamity if it took 300k years to repair than if it took “only” 30k years.
I think it matters because of what it implies about how hard a target civilization is to reach. Even if the 300k-year process could be sped up a lot by knowing what we’re aiming for, it’s evidence that, starting from the mere founding of civilization, the end result was a much weaker natural attractor than our current state is.
I found an interesting article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4429600/
Some factoids from it: “For example, there are nearly 20 million genomic loci that differ between humans and chimpanzees” (though ~99 per cent of the genome consists of non-coding regions).
“Another evolutionary approach has been to focus on genomic loci that are well conserved throughout vertebrate evolution but are strikingly different in humans; these regions have been named “human accelerated regions (HARs)” (Bird et al., 2007; Bush and Lahn, 2008; Pollard et al., 2006; Prabhakar et al., 2008). So far, ∼2700 HARs have been identified, again most of them in noncoding regions: at least ∼250 of these HARs seem to function as developmental enhancers in the brain”.
“Comparison of the FOXP2 cDNAs from multiple species indicates that the human FOXP2 protein differs at only 3 amino acid residues from the mouse ortholog, and at 2 residues from the chimpanzee, gorilla, and rhesus macaque orthologs … Mice carrying humanized FoxP2 show accelerated learning, qualitatively different ultrasonic vocalizations, and increased dendrite length and synaptic plasticity in the medium spiny neurons of the striatum.”
So my impression is that the changes in the genome were rather small, but very effective at fine-tuning the brain: creating new connections between regions, increasing its size, etc. The information content of the changes depends not only on the number of single-nucleotide changes but on their exact locations within the whole 3-billion-base-pair genome (addressing one locus takes about log2(3×10^9) ≈ 30 bits). But the main role was played by these ~250 HARs, and inside each HAR the change may be rather small, as in the case of FOXP2.
Multiplying all that suggests that the significant difference between the chimp and human brain-development programs is around 25,000 bits. I’m not sure this calculation is right, because there are many other genes and promoters in play.
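For concreteness, here is a minimal Python sketch of that arithmetic. The ~100 bits credited to each HAR is an assumed round number chosen only to land near the 25,000-bit total; the comment above does not spell out the exact multiplication.

```python
import math

# Back-of-envelope reconstruction of the estimate above.
# ASSUMPTION: BITS_PER_HAR = 100 is a guessed figure, picked to
# reproduce the ~25,000-bit total; it is not derived from the article.

GENOME_BP = 3_000_000_000   # ~3 billion base pairs in the human genome
BRAIN_HARS = 250            # HARs acting as developmental enhancers in the brain

# Specifying one locus in the genome costs log2(3e9) bits:
bits_per_location = math.log2(GENOME_BP)
print(f"bits to address one genomic locus: {bits_per_location:.1f}")  # ~31.5

# If each brain-related HAR carries on the order of 100 bits
# (its location plus a few FOXP2-sized changes), the total is:
BITS_PER_HAR = 100
total_bits = BRAIN_HARS * BITS_PER_HAR
print(f"estimated chimp-human brain-program difference: {total_bits} bits")
```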
The soft-software, imho, is what I call the “human training dataset”, and it includes, first of all, language (plus our home environment, all visual culture, etc.). The existence of feral children, who cannot afterwards be trained back into typical human behavior, suggests that the human brain is a universal learning machine (an idea discussed on LW), but that its training dataset sits outside the hardware of the machine.
We are now seeing the biggest changes in that dataset since ancient times, because of the Internet etc., and if the principles of universal thinking live in the dataset, we could lose them, as EY said.
I’m a little late to the game here, but I have a small issue with the above.
I don’t think it is accurate to estimate the size of the changes in such a manner: there is an enormous complex of transcription factors creating interplay between small changes, some of which may leave no visible trace, or sit outside the genome while still affecting it. SNPs (such as those in FOXP2) are important, but they are not the be-all and end-all for those expressions; epigenetic factors can drive selection just as effectively as chance mutation creates advantage. Two sides of the same coin, so to speak.
The HARs in question are not only genes; some of them are connected to multiple sections of the genome in this capacity. They carry effects and reactions that are hard to count as single units of information (bit encoding): activation of some factors may lead to activation or deactivation of others. This networking is far too massive to make sense of without intense inquiry (which assuredly is underway, with GWAS on the ~250 HARs mentioned above). Which leads to my question: how do you get 25,000 bits of difference? We did not see the pathway that effectively created that hardware, and much of it could be conceived of as environmental data, which I suppose is somewhat what you’re getting at, though your rote calculation seems to contradict it. Do you simply mean brain-development programs in the actual code? I don’t think that is as useful a framing, since it limits the frame of reference to a small part of the puzzle. Gene expression is much more affected by environmental stimuli than one might assume; feral children are an interesting case in point.