I think that “directly specified” is just an ill-defined concept. You can ask whether A specifies B using encoding C, but if you don’t fix C, then any A can be said to “specify” any B (you can always put the information into C). Algorithmic information theory might come to the rescue by rephrasing the question as: “what is the relative Kolmogorov complexity K(B|A)?” Here, however, we have more ground to stand on: there is some function f:G×E→B, where G is the space of genomes, E is the space of environments, and B is the space of brains. We might also be interested in a particular property of the brain, which we can think of as a function h:B→P; for example, h might be something about values and/or biases. We can then ask, e.g., how much mutual information there is between g∈G and h(f(g,e)) vs. between e∈E and h(f(g,e)). Or, we can ask what is more difficult: changing h(f(g,e)) by changing g or by changing e, where the amount of “difficulty” can be measured by, e.g., what fraction of inputs produce the desired output.
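To make the mutual-information comparison concrete, here is a toy sketch. Everything in it is made up for illustration, not a model of biology: genomes and environments are 4-bit strings, the development map f builds a “brain” from three genome bits and one environment bit, and the property h is just a bit count.

```python
import itertools
import math
from collections import Counter

# Toy setup (hypothetical, chosen only for concreteness): G and E are
# 4-bit strings, f reads three bits from the genome and one from the
# environment, and h counts set bits in the resulting "brain". We then
# compare I(g; h) vs. I(e; h) under the uniform distribution on G x E.

G = list(itertools.product([0, 1], repeat=4))  # genome space
E = list(itertools.product([0, 1], repeat=4))  # environment space

def f(g, e):
    """Hypothetical development map: brain = 3 genome bits + 1 environment bit."""
    return (g[0], g[1], g[2], e[3])

def h(b):
    """Hypothetical brain property: number of set brain bits."""
    return sum(b)

def mutual_information(pairs):
    """I(X; Y) in bits, computed exactly from an exhaustive list of (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

outcomes = [(g, e, h(f(g, e))) for g in G for e in E]
i_g = mutual_information([(g, p) for g, _, p in outcomes])
i_e = mutual_information([(e, p) for _, e, p in outcomes])
print(f"I(g; h) = {i_g:.3f} bits, I(e; h) = {i_e:.3f} bits")
# In this toy, the genome "specifies more" of h than the environment does
# (I(g; h) > I(e; h)), simply because f reads three bits from g and one from e.
```

The same setup makes the “difficulty” measure computable too: for a fixed (g, e), one can count what fraction of alternative genomes g′ change h(f(g′, e)), versus what fraction of alternative environments e′ change h(f(g, e′)).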
So, there are certainly questions that can be asked about what information comes from the genome and what information comes from the environment. I’m not sure whether this is what you’re going for, or whether you imagine some notion of information that comes from neither (but I have no idea what that would mean). In any case, I think your thesis would benefit if you specified it more precisely. Given such a specification, it would be possible to assess the evidence more carefully.