There’s no particular reason to doubt that a significant amount of the final data is encoded in the gestational environment.
To the contrary, there is every reason to doubt that. We already know that important pieces of the gestational environment (the genetic code itself, core metabolism, etc.) are encoded in the genome. By contrast, the amount of epigenetic information that we know of is minuscule. It is, of course, likely that we will discover more, but it is very unlikely that we will discover much more. The reason for this skepticism is that we don’t know of any reliable epigenetic means of transmitting generic information from generation to generation. And the epigenetic inheritance mechanisms that we do understand all require hundreds of times as much genetic information to specify the machinery as the amount of epigenetic information that the machinery can transmit.
To my mind, it is very clear that (on this narrow point) Kurzweil was right and PZ wrong: The Shannon information content of the genome places a tight upper bound on the algorithmic (i.e., Kolmogorov) information content of the embryonic brain. Admittedly, when we do finally construct an AI, it may take it 25 years to get through graduate school, and it may have to read through several hundred Wikipedia-equivalents to get there, but I am very confident that specifying the process for generating the structure and interconnect of the embryonic AI brain will take well under 7 billion bits.
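As a back-of-the-envelope check on that figure (assuming roughly 3.2 billion base pairs at a maximum of 2 bits per base; the numbers are illustrative, not taken from the exchange above):

```python
# Rough ceiling on the genome's Shannon information content,
# assuming ~3.2e9 base pairs and at most 2 bits per base (4 nucleotides).
base_pairs = 3.2e9
bits_per_base = 2.0  # log2(4)

genome_bits = base_pairs * bits_per_base
print(f"Raw genome capacity: {genome_bits / 1e9:.1f} gigabits")  # ~6.4 Gbit
print(f"As bytes:            {genome_bits / 8 / 1e6:.0f} MB")    # ~800 MB

# A program that builds the embryonic brain from the genome can be no more
# complex, in the Kolmogorov sense, than the genome plus a fixed-size
# interpreter, which is the sense in which ~7 gigabits is an upper bound.
assert genome_bits < 7e9
```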
I think you may have missed my devastating analysis of this issue a couple of years back:
“So, who is right? Does the brain’s design fit into the genome, or not?
The detailed form of proteins arises from a combination of the nucleotide sequence that specifies them, the cytoplasmic environment in which gene expression takes place, and the laws of physics.
We can safely ignore the contribution of cytoplasmic inheritance—however, the contribution of the laws of physics is harder to discount. At first sight, it may seem simply absurd to argue that the laws of physics contain design information relating to the construction of the human brain. However, there is a well-established mechanism by which physical law may do just that—an idea known as the anthropic principle. This argues that the universe we observe must necessarily permit the emergence of intelligent agents. If that involves coding the design of the brains of intelligent agents into the laws of physics, then so be it. There are plenty of apparently-arbitrary constants in physics where such information could conceivably be encoded: the fine structure constant, the cosmological constant, Planck’s constant—and so on.
At the moment, it is not even possible to bound the quantity of brain-design information so encoded. When we get machine intelligence, we will have an independent estimate of the complexity of the design required to produce an intelligent agent. Alternatively, when we know what the laws of physics are, we may be able to bound the quantity of information encoded by them. However, today neither option is available to us.”
http://alife.co.uk/essays/how_long_before_superintelligence/
You suggest that the human brain might have a high Kolmogorov complexity, the information for which is encoded, not in the human genome (which contains a mere 7 gigabits of information), but rather in the laws of physics, which contain arbitrarily large amounts of information, encoded in the exact values of physical constants. For example, the first 30 billion decimal digits of the fine structure constant contain 100 gigabits of information, putting the genome to shame.
Do I have that right?
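For readers checking the arithmetic: each decimal digit carries log2(10), about 3.32 bits, so the 100-gigabit figure follows directly (a quick sketch, not part of the original comment):

```python
import math

# Information carried by the first N decimal digits of a constant,
# at log2(10) ~= 3.32 bits per digit.
digits = 30e9
total_bits = digits * math.log2(10)
print(f"{total_bits / 1e9:.1f} gigabits")  # ~99.7, i.e. roughly 100 gigabits
```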
Well, I will give you points for cleverness, but I’m not buying it. I doubt that it much matters what the constants are, out past the first hundred digits or so. Yes, I realize that the details of how the universe proceeds may be chaotic; it may involve sensitive dependence both on initial conditions and on physical constants. But I don’t think that really matters. Physical constants haven’t changed since the Cambrian, but genomes have. And I think that it is the change in genomes which led to the human brain, the dolphin brain, the parrot brain, and the octopus brain. Alter the fine structure constant in the 2 billionth decimal place, and those brain architectures would still work, and those genomes would still specify development pathways leading to them. Or so I believe.
What makes you think that?
...and why not?
Under the hypothesis that physics encodes relevant information, a lot of the required information was there from the beginning. The fact that brains only became manifest after the Cambrian doesn’t mean the propensity for making brains was not there from the beginning. So: that observation doesn’t tell you very much.
Right—but what evidence do you have of that? You are aware of chaos theory, no? Small changes can lead to dramatic changes surprisingly quickly.
Organisms inherit the laws of physics (and indeed the initial conditions of the universe they are in) as well as their genomes. Information passes down the generations both ways. If you want to claim the design information is in one inheritance channel more than the other, it seems to me that you need some evidence relating to that issue. The evidence you have presented so far seems pretty worthless—the delayed emergence of brains seems equally compatible with both of the hypotheses under consideration.
So: do you have any other relevant evidence?
No other physical process is known to rely on physical constants to the degree you propose. What you propose is not impossible, but it is highly improbable.
What?!? What makes you think that?
Sensitive dependence on initial conditions is an extremely well-known phenomenon. If you change the laws of physics a little bit, the result of a typical game of billiards will be different. This kind of phenomenon is ubiquitous in nature, from the orbits of planets to the paths rivers take.
If a butterfly’s wing flap can cause a tornado, I figure a small physical constant jog could easily make the difference between intelligent life emerging, and it not doing so billions of years later.
Sensitive dependence on initial conditions is literally everywhere. Check it out:
http://en.wikipedia.org/wiki/Chaos_theory
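A minimal numerical sketch of the phenomenon being pointed to, using the logistic map at r = 4 as a stand-in chaotic system (my choice of example, not anything from the thread): two trajectories that start 10^-15 apart become completely decorrelated within a few dozen iterations.

```python
# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x).
r = 4.0
x, y = 0.3, 0.3 + 1e-15  # identical except at the 15th decimal place

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
# The separation grows roughly exponentially and is of order 1 by
# around step 50, despite the tiny initial difference.
```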
Did you miss this bit: “If a butterfly’s wing flap can cause a tornado, I figure a small physical constant jog could easily make the difference between intelligent life emerging, and it not doing so billions of years later.”
Sensitivity to initial conditions is one thing. Sensitivity to the billionth significant figure of a constant, within a couple of decades?
The universe took about 14 billion years to get this far—and if you look into the math of chaos theory, the changes propagate up very rapidly. There is an ever-expanding avalanche of changes—like an atomic explosion.
For the 750 MB or so of data under discussion, you would see the changes at a macroscopic scale quite rapidly. Atoms in stars bang into each other pretty quickly. I haven’t attempted to calculate it, but probably within a few minutes, I figure.
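One way to put numbers on “how fast could a tiny change propagate up” is to count doublings; the doubling times below are assumptions chosen purely for illustration, not measurements:

```python
import math

# Time for a discrepancy in the d-th decimal place to amplify to order 1,
# if the discrepancy doubles every tau seconds (exponential divergence).
def time_to_order_one(decimal_place, tau_seconds):
    doublings = decimal_place * math.log2(10)  # log2(10**d)
    return doublings * tau_seconds

d = 1000  # a change in the 1,000th decimal place -> ~3322 doublings

for label, tau in [("molecular-collision timescale (~1e-10 s)", 1e-10),
                   ("one second per doubling", 1.0),
                   ("one day per doubling", 86400.0)]:
    print(f"{label:40s}: ~{time_to_order_one(d, tau):.3g} s")
# ~3322 doublings regardless; the wall-clock answer depends entirely on
# the assumed doubling time of the relevant physical process.
```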
Would you actually go as far as maintaining that, if a change were to happen tomorrow to the 1,000th decimal place of a physical constant, it would be likely to stop brains from working, or are you just saying that a similar change to a physical constant, if it happened in the past, would have been likely to stop the sequence of events which has caused brains to come into existence?
Option 2. Existing brains might be OK, but I think newly constructed ones would fail to work properly when they matured. So option 2 would not be enough on its own.
Correction: That last line should be “which has CAUSED brains to come into existence?”
You can edit comments after submitting them—when logged in, you should see an edit button.
By the way, I’m reading your part 15, section 2 now.
Hi Silas!
Thanks for telling me that. I was logged in and didn’t see it, but I will look more carefully next time.
I’m actually proof-reading a document now which improves the “action selection process”. I was never happy with what I described and it was a kind of placeholder. The new stuff will be very short though.
Anyway, what do you do? I have the idea it is something computer related, maybe?
Apologies for the comment I inadvertently placed here. I thought I was answering a PM and did not mean to add personal exchanges. I find computers annoying sometimes, and will happily stop using them when something else that is Turing equivalent becomes available.
First, that is VERY different from the design information being in the constant but not in the genome. (You could more validly say that the genome is what it is because the constant is precisely what it is.)
Second, the billiard ball example is invalid. It doesn’t matter exactly where the billiard balls are if you’re getting hustled. Neurons are not typically sensitive to the precise positions of their atoms. Information processing relies on the ability to largely overlook noise.
What physical process would cease to function if you increased c by a billionth of a percent? Or one of the other Planck units? Processes involved in the functioning of both neurons and transistors don’t count, because then there’s no difference to account for.
Nitpick: c is a dimensioned quantity, so changes in it aren’t necessarily meaningful.
*Blink.*
*Reads Wikipedia.*
Would I be correct in thinking that one would need to modify the relationship of c to some other constant (the physics equation that represents some physical law?) for the change to be meaningful? I may be failing to understand the idea of dimension.
Thank you for the excuse to learn more math, by the way.
Yes, you would be correct, at least in terms of our current knowledge.
In fact, it’s not that unusual to choose units so that you can set c = 1 (i.e., to make it unitless). This way units of time and units of distance are the same kind, velocities are dimensionless geometric quantities, and so on.
You might want to think of “c” not so much as a speed as a conversion factor between distance-type units and time-type units.
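A small numerical illustration of that point (standard SI values, quoted approximately; the rescaling is just the “c is a conversion factor” observation made concrete): the fine-structure constant is dimensionless, so changing the numerical value of c by switching length units leaves it untouched.

```python
import math

# Fine-structure constant alpha = e^2 / (4*pi*eps0*hbar*c), which is dimensionless.
e    = 1.602176634e-19    # C
hbar = 1.054571817e-34    # J*s
c    = 2.99792458e8       # m/s
eps0 = 8.8541878128e-12   # F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha ~ 1/{1/alpha:.3f}")  # ~1/137.036

# "Double c" by halving the length unit: every value whose units contain
# metres rescales in step, and alpha does not move.
scale = 0.5                    # new length unit = 0.5 old metres
c2    = c / scale              # m/s: numerical value doubles
hbar2 = hbar / scale**2        # kg*m^2/s: value quadruples
eps02 = eps0 * scale**3        # C^2*s^2/(kg*m^3): value falls by 8x
alpha2 = e**2 / (4 * math.pi * eps02 * hbar2 * c2)
print(f"alpha in the new units ~ 1/{1/alpha2:.3f}")  # identical
```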
That isn’t really the idea. It would have to interfere with the development of a baby enough for its brain not to work out properly as an adult, though—I figure.