In the context of “what is the minimal amount of information it takes to build a human brain,” I can agree that there is some amount of compressibility in our genome. However, our genome is a lot like spaghetti code, where it is very hard to tell what individual bits do and what long-range effects a change may have.
Do we know how much of the human genome could definitely be replaced with a random sequence without causing problems?
In addition, do we know how much information is contained in the structure of a cell? You can’t just put the DNA of our genome in water and expect to get a brain. Our DNA resides in an enormously complex sea of nanomachines and structures. You need some combination of both to get a brain.
Honestly, I think the important takeaway is that there are probably a number of deep or high-level insights that we still need to figure out. Whether it’s 75 MB, 750 MB, or a petabyte doesn’t really matter if most of that information just describes machine parts or functions (e.g., a screw, a bolt, a wheel, etc.). Descriptions of simple components can still take up a lot of raw storage. Frankly, I think 1 MB containing 1000 deep insights at maximum compression would be far more difficult to comprehend than a petabyte containing loads of parts descriptions and only 10 deep insights.
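The intuition here can be sketched with a toy experiment (the strings and sizes are made up for illustration, and `zlib` stands in for “maximum compression”): highly redundant parts descriptions shrink dramatically under compression, while data that is already information-dense, approximated here by random bytes, barely shrinks at all.

```python
import os
import zlib

# A "parts description": simple, highly redundant specs repeated many times.
parts = b"screw: M3 x 10 mm, steel; bolt: M5 x 20 mm, steel; " * 500

# A stand-in for maximally compressed "deep insight": random bytes,
# which leave no redundancy for a compressor to exploit.
insight = os.urandom(len(parts))

parts_packed = zlib.compress(parts, 9)
insight_packed = zlib.compress(insight, 9)

# The redundant parts list collapses to a tiny fraction of its size;
# the random data stays essentially the same size (or grows slightly).
print(f"parts:   {len(parts)} -> {len(parts_packed)} bytes")
print(f"insight: {len(insight)} -> {len(insight_packed)} bytes")
```

In other words, a large raw byte count says little about how many hard-won ideas a description contains; redundancy inflates size without adding depth.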