“The ‘organism-as-agent’ idea is seen by some authors as a potential corrective to gene-centrism, emphasizing as it does the power of organisms to make choices, overcome challenges, modify their environment, and shape their own fate.”
I don’t see much friction here. It seems perfectly coherent with gene-centrism that sometimes the optimal thing for a gene to do in order to “advance its fitness” is to build an agentic organism that competently pursues goals, or acts on drives, that are beneficial to the “selfish” gene.
“(Nor is it entirely clear, for that matter, whether they should, i.e. whether ‘long range consequentialism’ points at anything real and coherent.)”
I agree that it may not point to anything real (in the sense of “realizable in our universe”), but I’d be curious to hear why you think it may not even be coherent, and in what sense.
I wonder whether, by trying to formalize the fuzzy concept of agency that we already have (plausibly originating in an inductive bias that was selected for by the need to model animals of one’s own and other species), we are shooting ourselves in the foot. A review of the relationships people see between “agency” and other salient concepts (autonomy, goals, rationality, intelligence, “internally originated” behavior, etc.) is probably valuable just for locally disentangling our semantic web and giving us more material to work with. But perhaps we should instead aim to find more legible, clearly specified, and measurable properties and processes in the world that we consider relevant, and build our ontology bottom-up from them.