This all just seems like what Richard Dawkins called an ‘argument from failure of imagination’.
I’m saying “Either the genome can hardcode death fear, which would have huge alignment implications; or it can’t, which would have huge alignment implications; or it can hardcode death fear, but only via advantages evolution had that we won’t have, which doesn’t have huge implications.” Of the three, I think the second is most likely—that you can’t just read off an a priori unknown data structure and figure out where death is computed inside of it. If there were easier, less complex ways to get an organism to fear death, I’d expect evolution to have found those instead.
See also clues from ethology, where even juicy candidates-for-hardcoding ended up not being hardcoded.
TurnTrout—I think the ‘either/or’ framing here is misleading about the way that genomes can adapt to maximize survival and minimize death.
For example, jumping spiders have evolved special secondary eyes pointing backwards that specifically detect predators approaching from behind. At the functional level of minimizing death, these eyes ‘hardcode death-fear’ in a very real, morphological way. Similarly, many animals vulnerable to predators evolve eyes positioned on the sides of their heads, maximizing their field of view. Prey animals also evolve horizontally elongated pupils adapted to scanning the horizon for predators, i.e. for death-risks; the morphology of their visual systems itself ‘encodes’ fear of death from predators.
More generally, any complex adaptations that humans have evolved to avoid starvation, infection, predation, aggression, etc. can be analyzed as ‘encoding a fear of death’, and can be unpacked functionally in terms of risk sensitivity, loss aversion, Bayesian priors about the most dangerous organisms and events in the environment, etc. There are thousands of papers in animal behavior that do this kind of functional analysis—in anti-predator strategies, anti-pathogen defenses, evolutionary immunology, optimal foraging theory, food choice, intrasexual aggression, etc. This stuff is the bread and butter of behavioral biology.
So, if this strategy of evolutionary-functional analysis of death-avoidance adaptations has worked so well in thousands of other species, I don’t see why it should be considered ‘impossible in principle’ for humans, based on some theoretical arguments about how genomes can’t read off neural locations for ‘death-detecting cells’ from the adult brain.
The key point, again, is that genomes never need to ‘read off’ details of adult neural circuitry; they just need to orchestrate brain development—in conjunction with ancestrally typical, cross-generationally recurring features of their environments—so that it reliably results in psychological adaptations that represent important life values and solve important life problems.
I don’t see why it should be considered ‘impossible in principle’ for humans, based on some theoretical arguments about how genomes can’t read off neural locations for ‘death-detecting cells’ from the adult brain.
People are indeed effectively optimized by evolution to do behavior X in situation Y (e.g. to be afraid when death seems probable). Evolution did that a lot; people are quite optimized by evolution in the usual behavioral-biology ways you described.
I’m rather saying that the genome can’t e.g. specify a neural circuit which fires if and only if a person is thinking about death. I’m saying that most biases are probably not explicit adaptations, and that evolution cannot directly select for certain high-level cognitive properties (and only those properties), e.g. “level of risk aversion”, “behavior follows discounting scheme X”, or “vulnerability to the framing effect.” But evolution absolutely can and did select genotypes which unfold into minds that tend to be shaped in the form “cares more about ingroup.”
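To make that distinction concrete, here’s a minimal toy sketch (everything in it, including the corridor world, the reward numbers, and the hypothetical GENOME dict, is invented for illustration). The “genome” below supplies only a hardwired pain signal for raw contact with a predator square, plus learning hyperparameters; it never references any internal representation of death. Wariness of predator-adjacent states then emerges from ordinary within-lifetime learning:

```python
import random

# Hypothetical toy corridor world. The "genome" specifies only an innate
# pain reflex (triggered by raw sensory contact with the predator square)
# and learning hyperparameters. It never inspects or points at the agent's
# learned value table Q.
random.seed(0)
N = 6                                    # corridor states 0..5; predator sits at 5
GENOME = {"innate_pain": -10.0, "alpha": 0.5, "gamma": 0.9}
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}

def step(s, a):
    s2 = max(0, min(N - 1, s + a))
    if s2 == N - 2 and random.random() < 0.5:
        s2 = N - 1                       # predator sometimes lunges at square 4
    # the only genetically specified reward: pain on contact, small food gain otherwise
    r = GENOME["innate_pain"] if s2 == N - 1 else 0.1
    return s2, r

for _ in range(2000):                    # episodes of within-lifetime Q-learning
    s = random.randrange(N - 1)
    for _ in range(20):
        if random.random() < 0.2:        # occasional exploration
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        target = r + GENOME["gamma"] * max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += GENOME["alpha"] * (target - Q[(s, a)])
        s = s2

# The learned value of approaching the predator turns negative near it,
# though the genome never specified anything about those states:
for s in range(N - 1):
    print(s, round(Q[(s, 1)], 2))
```

The design point: selection could tune GENOME’s three numbers without ever needing to locate a “thinking about death” circuit inside the learned values.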
Hopefully this comment clarifies my views some?
That’s somewhat helpful.
I think we’re coming at this issue from different angles—I’m taking a very evolutionary-functional view focused on what selection pressures shape psychological adaptations, what environmental information those adaptations need to track (e.g. snake! or pathogen!), what they need to represent about the world (e.g. imminent danger of death from threat X!), and what behaviors they need to trigger (e.g. run away!).
From that evolutionary-functional view, the ‘high-level cognitive properties’ of ‘fitness affordances’ are the main things that matter to evolved agents, and the lower-level details of what genes are involved, what specific neural circuits are needed, or what specific sensory inputs are relevant, just don’t matter very much—as long as there’s some way for evolution to shape the relevant psychological adaptations.
And the fact that animals do reliably evolve to track the key fitness affordances in their environments (e.g. predators, prey, mates, offspring, kin, herds, dangers) suggests that the specifics of neurogenetic development don’t in fact impose much of a constraint on psychological evolution.
It seems like you’re coming at the issue from more of a mechanistic, bottom-up perspective that focuses on the mapping from genes to neural circuits. Which is fine, and can be helpful. But I would just be very wary about using neurogenetic arguments to make overly strong claims about what evolution can or can’t do in terms of crafting complex psychological adaptations.
Seems like we broadly agree on most points here, AFAICT. Thanks again for your engagement. :)
the fact that animals do reliably evolve to track the key fitness affordances in their environments (e.g. predators, prey, mates, offspring, kin, herds, dangers) suggests that the specifics of neurogenetic development don’t in fact impose much of a constraint on psychological evolution.
This evidence shows that evolution is somehow able to adapt organisms to the relevant affordances, but it doesn’t (to my eye) discriminate strongly between worlds where that influence is mediated by direct selection on high-level cognitive properties and worlds where it isn’t.
For example, how strongly do these observations discriminate between worlds where evolution did or didn’t have the ability to directly select adaptations over high-level cognitive properties (like “afraid of death in the abstract”)? Would we notice the difference between those worlds? What amount of affordance-tailoring would we expect in worlds where evolution could perform such selection, compared to worlds where it couldn’t?
It seems to me that we wouldn’t notice the difference. There are many dimensions of affordance-tailoring, and it’s harder to see affordances that weren’t successfully selected for.
For a totally made-up and naive but illustrative example: suppose adult frogs reliably generalize to model that a certain kind of undercurrent is dangerous (i.e. leads to predicted-death), but that undercurrent doesn’t leave sensory-definable signs. Then evolution might not have been able to select frogs to avoid that particular kind of undercurrent, even though frogs represent the undercurrent in their world model. If the undercurrent decreases fitness by enough, perhaps frogs are instead selected to be averse to necessary conditions for waters having those undercurrents—maybe those are sensory-definable (or otherwise definable in terms of e.g. cortisol predictions).
But we might just see a frog which is selected for a huge range of other affordances, and never realize that evolution failed on the undercurrent-affordance. (The important point here doesn’t have to do with frogs, and I expect it to stand even if the example is biologically naive.)
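To make the undercurrent example concrete, here is a deliberately naive toy simulation (every cue name, weight, and constant below is invented; it sketches the selection dynamic, not real frog biology). The genome weights can only reference observable cues, so the lineage evolves aversion to a sensory correlate of the danger rather than to the latent undercurrent itself:

```python
import random

# Made-up toy model: each "frog" genome is a pair of aversion weights over
# *observable* pond cues: murkiness (which correlates with the latent
# undercurrent) and an irrelevant cue. The undercurrent itself is latent,
# so no genome weight can reference it directly.
random.seed(0)
POP, GENS, PONDS = 200, 200, 5

def fitness(genome):
    w_murk, w_noise = genome
    food = 0.0
    for _ in range(PONDS):
        murk, noise = random.random(), random.random()
        undercurrent = 0.7 * murk + 0.3 * random.random()  # latent danger
        if w_murk * murk + w_noise * noise > 0.5:
            continue                         # frog avoids this pond (forgoes food)
        if random.random() < undercurrent:   # swept away: death ends foraging
            return food * 0.1                # heavy fitness cost of dying
        food += 1.0
    return food

def mutate(genome):
    # small Gaussian mutations, weights clamped to [0, 1]
    return tuple(min(1.0, max(0.0, w + random.gauss(0, 0.05))) for w in genome)

pop = [(random.random(), random.random()) for _ in range(POP)]
for _ in range(GENS):
    ranked = sorted(pop, key=fitness, reverse=True)
    pop = [mutate(random.choice(ranked[:POP // 2])) for _ in range(POP)]  # truncation selection

print("mean aversion to murk (danger correlate):", round(sum(g[0] for g in pop) / POP, 2))
print("mean aversion to noise (irrelevant cue): ", round(sum(g[1] for g in pop) / POP, 2))
```

The upshot mirrors the prose above: whatever aversion evolves attaches to the observable proxy (murk), because that is the only hook selection has on the latent danger.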