I don’t classify “convergently learned” as an instance of “directly specified”, but rather as “indirectly specified, in conjunction with the requisite environmental data.” Here’s an example. I think that humans’ reliably learned edge detectors in V1 are not “directly specified”, in the same way that vision models don’t have directly specified curve detectors; rather, these detectors are convergently learned in order to do well on vision tasks.
If I say “sunk cost is directly specified”, I mean something like “the genome specifies neural circuitry which will eventually, in situations where sunk cost arises, fire so as to influence decision-making.” However, if the genome instead lays out, for example, the macrostructure of the connectome, the broad-scale learning process, some reward circuitry, regional learning hyperparameters, and some other details, and the resulting brain eventually comes to implement a sunk-cost bias, I don’t call that “direct specification.”
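To make the analogy concrete, here is a toy PyTorch sketch (the filters, layer sizes, and training step are purely illustrative stand-ins, not claims about any actual model or about the genome). Hard-coding Sobel filters into a convolution plays the role of “direct specification”; fixing only the architecture and learning rule, and letting edge detectors emerge from training data, plays the role of “indirect specification in conjunction with environmental data.”

```python
# Toy contrast between the two notions (illustrative only).
import torch
import torch.nn as nn

# "Directly specified": the edge detector is written into the weights up front.
direct = nn.Conv2d(1, 2, kernel_size=3, padding=1, bias=False)
sobel_x = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]])
with torch.no_grad():
    direct.weight.copy_(torch.stack([sobel_x, sobel_x.t()]).unsqueeze(1))

# "Indirectly specified": only the architecture, learning rule, and training
# signal are fixed; the filters start random and only become edge detectors
# (if they do) through training on data from the environment.
indirect = nn.Conv2d(1, 2, kernel_size=3, padding=1, bias=False)
opt = torch.optim.SGD(indirect.parameters(), lr=1e-2)

images = torch.randn(8, 1, 32, 32)     # stand-in for visual experience
targets = direct(images).detach()      # pretend edge maps are what the task rewards
loss = nn.functional.mse_loss(indirect(images), targets)
loss.backward()
opt.step()
# After enough training on real vision tasks, `indirect`'s filters would tend to
# converge toward edge/curve detectors even though none were specified in advance.
```

On this analogy, claiming the sunk-cost bias is “directly specified” is claiming it lives on the `direct` side: circuitry written down in advance, rather than circuitry that a fixed learning process happens to converge to.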
I wish I had been more explicit about “direct specification”, and perhaps this comment is still not clear. Please let me know if so!
I think that “directly specified” is just an ill-defined concept. You can ask whether A specifies B using encoding C, but if you don’t fix C, then any A can be said to “specify” any B (you can always put the information into C). Algorithmic information theory might come to the rescue by rephrasing the question as: “what is the conditional Kolmogorov complexity K(B|A)?” Here, however, we have more ground to stand on: namely, there is some function f : G × E → B, where G is the space of genomes, E is the space of environments, and B is the space of brains. We might also be interested in a particular property of the brain, which we can think of as a function h : B → P; for example, h might be something about values and/or biases. We can then ask, e.g., how much mutual information there is between g ∈ G and h(f(g,e)) vs. between e ∈ E and h(f(g,e)). Or, we can ask what is more difficult: changing h(f(g,e)) by changing g or by changing e, where the amount of “difficulty” can be measured by, e.g., what fraction of inputs produces the desired output.
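To make this concrete, here is a toy sketch of both measures (the spaces, f, and h below are entirely made up; the real f would be development plus learning, and h some property like “implements a sunk-cost bias”):

```python
# Toy versions of the two proposed measures (all functions are made-up stand-ins).
import math
import itertools

G = range(64)    # toy genome space
E = range(64)    # toy environment space

def f(g, e):
    """Stand-in for f : G x E -> B (development in an environment)."""
    return (g + e // 8) % 64

def h(brain):
    """Stand-in for h : B -> P, here a boolean property of the 'brain'."""
    return brain >= 32

def mutual_information(which):
    """Plug-in estimate of I(X ; h(f(g,e))) in bits, for X = g or X = e,
    with (g, e) drawn uniformly from G x E."""
    joint, px, py = {}, {}, {}
    for g, e in itertools.product(G, E):
        x = g if which == "genome" else e
        key = (x, h(f(g, e)))
        joint[key] = joint.get(key, 0) + 1
    n = len(G) * len(E)
    for (x, y), c in joint.items():
        px[x] = px.get(x, 0) + c
        py[y] = py.get(y, 0) + c
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# Which input carries more information about the property?
print(mutual_information("genome"), mutual_information("environment"))

# Which intervention is "easier"? Starting from a fixed (g0, e0), what fraction
# of genome edits vs. environment edits produces the property (the desired output)?
g0, e0 = 3, 5
print(sum(h(f(g, e0)) for g in G) / len(G),   # fraction of g's that work
      sum(h(f(g0, e)) for e in E) / len(E))   # fraction of e's that work
```

In this toy, the property is almost fully determined by g and barely at all by e, so both measures agree that changing g is the easier lever; for real brains the interesting question is what the analogous numbers look like.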
So, there are certainly questions one can ask about which information comes from the genome and which comes from the environment. I’m not sure whether this is what you’re going for, or whether you imagine some notion of information that comes from neither (though I have no idea what that would mean). In any case, I think your thesis would benefit from being specified more precisely. Given such a specification, it would be possible to assess the evidence more carefully.