You’re probably thinking of the debate over ENCODE. It was a furious debate over what the ENCODE results meant: whether mere biochemical activity proved non-junkness, and whether they even measured the narrow biochemical activity they claimed to measure and on which they based their interpretations. I didn’t follow it in detail, but my overall impression was that most people were not convinced by the ENCODE claims and continue to regard junk DNA as pretty junky (or outright harmful, given all the retrotransposons and viruses lurking in it).
Genome synthesis may help answer this in the not-too-distant future: it’s already been used to create ‘minimal organism’ bacterial genomes which are much smaller, and synthetic genomes stripped of the ‘junk DNA’ are appealing because synthesis costs so much that you want to cut corners wherever possible, so empirically demonstrating that the junk DNA doesn’t matter is an obvious and valuable goal.
Ah, interesting. I’d not heard of ENCODE, and I wasn’t trying to say that there’s no such thing as DNA without function.
The way I remembered it was that 10% of DNA was coding, and that a sizeable proportion of the rest was promoters, introns, and the like, much of which had fairly recently been reclaimed from ‘junk’ status. From that wiki, though, it seems that only 1–2% is actually protein-coding.
In any case, I’d overlooked the fact that even within genes, function isn’t going to be sensitive to every base pair.
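As a rough illustration of that redundancy (a back-of-envelope sketch of my own using nothing but the standard genetic code, not any ENCODE result): 64 codons map onto only 20 amino acids plus stop, so each codon specifies well under its raw 6 bits.

```python
import math

# Back-of-envelope: redundancy within coding sequence arising from
# the degeneracy of the standard genetic code alone.
raw_bits_per_codon = 3 * 2           # 3 bp/codon x 2 bits/bp = 6 bits
bits_per_amino_acid = math.log2(20)  # picking 1 of 20 amino acids ~ 4.32 bits

redundant_bits = raw_bits_per_codon - bits_per_amino_acid
print(f"raw bits per codon:       {raw_bits_per_codon}")
print(f"bits to specify residue:  {bits_per_amino_acid:.2f}")
print(f"redundant bits per codon: {redundant_bits:.2f}")  # ~1.68, i.e. ~28%
```

And that ignores the further slack from conservative amino-acid substitutions that leave protein function intact, so the true per-base-pair sensitivity within genes is lower still.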
I’d be super interested if there were any estimates of how many bits of genome it takes to encode one bit of a neural-wiring algorithm as expressed in minified code. I’d guess the DNA encoding is wildly inefficient, and that neural-wiring algorithms expressed as code would actually be much smaller than 7.5MB; but then the genome has had a lot of time and selective pressure to maximise its information content, so I’m unsure.
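For what it’s worth, here is presumably where a figure like 7.5MB comes from (my reconstruction, not something stated above): take the raw information content of the genome and keep only the ~1% that is protein-coding.

```python
# My reconstruction (an assumption, not stated in the thread) of where
# a figure like 7.5MB comes from: raw genome information content,
# scaled down to the ~1% that is protein-coding.
genome_bp = 3e9               # ~3 billion bp, haploid human genome
bits_per_bp = 2               # 4 bases -> log2(4) = 2 bits per bp
raw_megabytes = genome_bp * bits_per_bp / 8 / 1e6
print(f"raw genome:  {raw_megabytes:.0f} MB")    # ~750 MB

coding_fraction = 0.01        # ~1-2% coding, per the wiki figure above
coding_megabytes = raw_megabytes * coding_fraction
print(f"~1% coding:  {coding_megabytes:.1f} MB") # ~7.5 MB
```

Any real comparison against minified code would then have to discount this further, both for the codon redundancy sketched above and for the fraction of coding sequence that isn’t about neural wiring at all.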