But I also don’t see a reason for intelligence to be easier to express with messy biology and chemistry than with computer code.
Do you have any reason to expect it to be the same? Do we have any reason at all? I’m not arguing that it will take more than 50 MB of code; I’m arguing that the DNA value is not informative.
The things about intelligence that are closest to biology (interfacing with the real world, how a single neuron functions) are also the kind of things we can already do quite well with computer programs.
We are far worse at doing the equivalent of changing neural structure or adding new neurons in computer programs (for one, we don’t know why or how neurogenesis works).
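For reference, here is the sort of back-of-the-envelope arithmetic that usually sits behind genome-size figures in that range. The inputs (about 3.2 billion base pairs, 2 bits per base, a protein-coding fraction of roughly 1.5%) are common ballpark values I’m assuming for illustration, not numbers from this thread:

    # Rough, assumption-laden ballpark of the information content of the human genome.
    base_pairs = 3.2e9        # assumed haploid genome length, ~3.2 billion base pairs
    bits_per_base = 2         # 4 possible bases -> 2 bits each, ignoring any redundancy
    raw_mb = base_pairs * bits_per_base / 8 / 1e6
    print(f"raw genome: ~{raw_mb:.0f} MB")                       # ~800 MB

    coding_fraction = 0.015   # assumed ~1.5% protein-coding; a crude proxy, not a measurement
    print(f"coding part: ~{raw_mb * coding_fraction:.0f} MB")    # ~12 MB, the tens-of-MB ballpark

Whether a number like that tells us anything about the size of a program for intelligence is exactly what’s in question.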
But I also don’t see a reason for intelligence to be easier to express with messy biology and chemistry than with computer code.
Do you have any reason to expect it to be the same? Do we have any reason at all?
If I know a certain concept X requires 12 seconds of speech to express in English, and I don’t know anything about Swahili beyond the fact that it’s a human language, my first guess will be that concept X requires 12 seconds of speech to express in Swahili.
I would also expect compressed versions of translations of the same book into various languages to be roughly the same size.
So, even with very little information, a first estimate (with a big error margin) would be that it takes about as many bits to “encode” intelligence in DNA as it does in computer code.
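If I wanted to check that compressed-translations intuition empirically, a minimal sketch would be something like the following; the filenames are hypothetical, and any plain-text translations of the same book would do:

    import zlib

    # Hypothetical files: plain-text translations of the same book.
    # The intuition being checked: their compressed sizes should be roughly comparable.
    files = ["book_english.txt", "book_swahili.txt", "book_french.txt"]

    for path in files:
        with open(path, "rb") as f:
            raw = f.read()
        compressed = zlib.compress(raw, 9)
        print(f"{path}: {len(raw)} bytes raw, {len(compressed)} bytes compressed")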
In addition, the fact that some intelligence-related abilities, such as multiplying large numbers, are easy to express in computer code but rare in nature would make me revise that estimate towards “code is more expressive than DNA for some intelligence-related stuff”.
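To illustrate that point, multiplying two 100-digit numbers takes a couple of lines of code, while it is not something nature ever produced as a native ability:

    # Two ~100-digit integers; their exact product is one built-in operation away.
    a = 3 ** 210   # a 101-digit number
    b = 7 ** 119   # a 101-digit number
    print(a * b)   # arbitrary-precision multiplication comes for free in the language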
Knowledge about the history of evolution would also make me suspect that large chunks of the human genome are not required for intelligence, either because they aren’t expressed, or because they only concern traits that have no impact on our intelligence beyond keeping us alive. That too would make me revise my estimate of the code size needed for intelligence downwards.
None of those are very strong reasons, but they are reasons nonetheless!
If I know a certain concept X requires 12 seconds of speech to express in English, and I don’t know anything about Swahili beyond the fact that it’s a human language, my first guess will be that concept X requires 12 seconds of speech to express in Swahili.
You’d be very wrong for a lot of technical language, unless they just imported the English words wholesale. For example, “Algorithmic Information Theory” expresses a concept well, but I’m guessing it would be hard to explain in Swahili.
Even given that, you can expect all human languages to express things at roughly the same length, because they are generated by roughly the same hardware and deal with roughly the same concerns, e.g. things to do with humans.
To give a more realistic translation problem: how long would you expect it to take to express/explain a random English sentence in C code, or vice versa?
Selecting a random English sentence will introduce a bias towards concepts that are easy to express in English.