As a biologist, my mind goes elsewhere. Specifically, to how viruses are by far the most abundant biological entities on Earth, and how most of the coding sequence in your genome belongs to selfish retroelements: reverse-transcription machinery whose whole job is to shove copies of its own sequence anywhere it can gain a foothold. Other examples abound. The spliceosome at the heart of eukaryotic gene expression appears to have arisen as a defense against invasion of the genome by self-copying, reverse-transcribing introns, invaders that natural selection could no longer purge once eukaryotic cell size rose and population size fell enough to weaken selection against mildly deleterious insertions, but in the process it entrenched those selfish elements so deeply that they can no longer leave.
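To put a rough number on "weakened selection": a minimal sketch using Kimura's classic fixation-probability formula, assuming a diploid population whose census size is roughly its effective size Ne and an insertion with a mildly deleterious selection coefficient of about -1e-5, shows how such an element goes from reliably purged to effectively neutral as Ne falls.

```python
import math

def p_fix(s: float, Ne: float) -> float:
    """Kimura's diffusion approximation for the fixation probability of a single
    new mutant copy (initial frequency 1/(2Ne)) with selection coefficient s."""
    p0 = 1.0 / (2.0 * Ne)                      # one new copy in a diploid population
    if abs(4.0 * Ne * s) < 1e-8:               # effectively neutral regime: drift alone
        return p0
    return (1.0 - math.exp(-4.0 * Ne * s * p0)) / (1.0 - math.exp(-4.0 * Ne * s))

s = -1e-5                                      # a mildly deleterious insertion (e.g. a new intron)
for Ne in (1e4, 1e5, 1e6):                     # roughly: large eukaryotes -> microbes
    relative = p_fix(s, Ne) / (1.0 / (2.0 * Ne))
    print(f"Ne={Ne:.0e}: P(fix) relative to a neutral mutation = {relative:.2g}")
```

At Ne around 1e6 the insertion essentially never fixes; at Ne around 1e4 it fixes almost as readily as a neutral change, which is the drift-barrier logic behind intron proliferation in large-celled, small-population eukaryotes.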
tangential, but my intuitive sense is that the ideal long-term ai safety algorithm would need to help those who wish to retain their biology give their genetic selfish elements a stern molecular talking-to: constructing a complex but lightweight genetic immune system that reduces their tendency to cause new deleterious mutations, among a great many other extremely complex hotfixes needed to shore up error defenses before a dna-based organism can be truly information-immortal yet runtime-adaptable.
I have not.