I’d note that it’s possible for an organism to learn to behave (and think) in accordance with the “simple mathematical theory of agency” you’re talking about, without said theory being directly specified by the genome. If the theory of agency really is computationally simple, then many learning processes probably converge towards implementing something like that theory, simply as a result of being optimized to act coherently in an environment over time.
Well, how do you define “directly specified”? If human brains reliably converge towards a certain algorithm, then effectively this algorithm is specified by the genome. The real question is which parts depend only on the genes and which parts depend on the environment. My tentative opinion is that the majority is in the genes, since humans are, broadly speaking, pretty similar to each other. One environmental effect is that feral humans grow up with serious mental problems. But my guess is that this is not because of missing “values” or “biases”, but (to a first approximation) because they lack the ability to think in language. Another contender for the environment-dependent part is cultural values. But even here, I suspect that humans just follow social incentives rather than acquire cultural values as an immutable part of their own utility function. I admit that it’s difficult to be sure about this.
I don’t classify “convergently learned” as an instance of “directly specified”, but rather “indirectly specified, in conjunction with the requisite environmental data.” Here’s an example. I think that humans’ reliably-learned edge detectors in V1 are not “directly specified”, in the same way that vision models don’t have directly specified curve detectors, but these detectors are convergently learned in order to do well on vision tasks.
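To illustrate the “convergently learned” sense in a toy setting, here is a sketch I made up (it is not a model of V1 or of any particular vision network; PCA is just a stand-in for a generic learning process applied to generic spatially-structured data): edge-like filters fall out of the algorithm-plus-data combination even though nothing in the algorithm mentions edges.

```python
# Toy illustration (my own, hypothetical setup): a generic unsupervised learning step
# (PCA) applied to patches of a spatially smooth random "image" recovers oriented,
# edge-like filters. Nothing in the algorithm "directly specifies" edge detectors;
# they emerge from the learning rule plus the statistics of the data.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image": smoothed noise, which contains oriented gradients.
img = rng.normal(size=(128, 128))
for _ in range(3):  # cheap smoothing (periodic boundaries) to create spatial structure
    img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0

# Collect 8x8 patches and run PCA (via SVD of the centered patch matrix).
P = 8
patches = np.array([
    img[i:i + P, j:j + P].ravel()
    for i in range(0, 128 - P, 2)
    for j in range(0, 128 - P, 2)
])
patches -= patches.mean(axis=0)
_, _, vt = np.linalg.svd(patches, full_matrices=False)

# After the DC-like first component, the leading components typically look like
# oriented gradient/edge filters: one half of the patch positive, the other negative.
for k in range(1, 4):
    filt = vt[k].reshape(P, P)
    print(f"component {k}: sign pattern\n{np.sign(filt).astype(int)}\n")
```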
If I say “sunk cost is directly specified”, I mean something like “the genome specifies neural circuitry which will eventually, in situations where sunk cost arises, fire so as to influence decision-making.” However, if, for example, the genome lays out the macrostructure of the connectome and the broad-scale learning process and some reward circuitry and regional learning hyperparameters and some other details, and then this brain eventually comes to implement a sunk-cost bias, I don’t call that “direct specification.”
I wish I had been more explicit about “direct specification”, and perhaps this comment is still not clear. Please let me know if so!
I think that “directly specified” is just an ill-defined concept. You can ask whether A specifies B using encoding C. But what if you don’t fix C? Then any A can be said to “specify” any B (you can always put the information into C). Algorithmic information theory might come to the rescue by rephrasing the question as: “what is the relative Kolmogorov complexity K(B|A)?” Here, however, we have more ground to stand on: namely, there is some function f:G×E→B, where G is the space of genomes, E is the space of environments and B is the space of brains. We might also be interested in a particular property of the brain, which we can think of as a function h:B→P; for example, h might be something about values and/or biases. We can then ask, e.g., how much mutual information there is between g∈G and h(f(g,e)) vs. between e∈E and h(f(g,e)). Or, we can ask what is more difficult: changing h(f(g,e)) by changing g or by changing e, where the amount of “difficulty” can be measured by, e.g., what fraction of inputs produce the desired output.
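To make the two measures concrete, here is a toy sketch with entirely made-up ingredients (four-bit “genomes” and “environments”, and arbitrary choices of f and h that I invented for illustration); it just computes the mutual-information comparison and the fraction-of-changes comparison described above, nothing about real genomes or brains.

```python
# Toy sketch (hypothetical f and h, chosen only for illustration) of two ways to ask
# how much a brain property h(f(g, e)) depends on the genome g vs. the environment e:
#   (1) mutual information I(G; P) vs. I(E; P) under uniform g, e
#   (2) what fraction of single-bit changes to g vs. to e change the property

import itertools
import math
from collections import Counter

N_G, N_E = 4, 4  # toy "genome" and "environment" are 4-bit strings

def f(g, e):
    """Toy development map G x E -> B. The 'brain' is a pair of features:
    feature 0 is a genome-determined trait, feature 1 flags a rare 'deprived'
    environment (all environment bits zero)."""
    return (1 if sum(g) >= N_G / 2 else 0, 1 if sum(e) == 0 else 0)

def h(brain):
    """Toy property of the brain: the genome trait, flipped in the rare environment."""
    return brain[0] ^ brain[1]

genomes = list(itertools.product([0, 1], repeat=N_G))
envs = list(itertools.product([0, 1], repeat=N_E))

def mutual_information(pairs):
    """I(X; Y) in bits, for the empirical (uniform) distribution over the given pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

samples = [(g, e, h(f(g, e))) for g in genomes for e in envs]
i_gene = mutual_information([(g, p) for g, _, p in samples])
i_env = mutual_information([(e, p) for _, e, p in samples])

def flip_fraction(vary_genome):
    """Fraction of single-bit edits (to g or to e) that change the property."""
    flips = total = 0
    for g in genomes:
        for e in envs:
            base = h(f(g, e))
            target = g if vary_genome else e
            for i in range(len(target)):
                mutated = list(target)
                mutated[i] ^= 1
                new = (h(f(tuple(mutated), e)) if vary_genome
                       else h(f(g, tuple(mutated))))
                flips += (new != base)
                total += 1
    return flips / total

print(f"I(G; P) = {i_gene:.3f} bits,  I(E; P) = {i_env:.3f} bits")
print(f"fraction of genome edits that change P:      {flip_fraction(True):.3f}")
print(f"fraction of environment edits that change P: {flip_fraction(False):.3f}")
```

With this particular toy f and h, both measures attribute most of the property to the genome; of course, the whole point is that the answer is a quantitative, model-dependent question rather than a binary “directly specified or not”.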
So, there are certainly questions that can be asked about what information comes from the genome and what information comes from the environment. I’m not sure whether this is what you’re going for, or whether you imagine some notion of information that comes from neither (but I have no idea what that would mean). In any case, I think your thesis would benefit from being specified more precisely. Given such a specification, it would be possible to assess the evidence more carefully.