The fact that different cultures have different concepts of death, or that the concept splinters away from the things it was needed for in the ancestral environment, doesn’t seem to contradict my claim. What matters is not that the ideas are exactly the same from person to person, but that the concept has the essential properties that mattered in the ancestral environment. For instance, as long as the concept of death you pick out can predict that killing a lion makes it no longer able to kill you, that dying means disempowerment, and so on, it doesn’t matter whether you also believe ghosts exist, so long as your ghost belief isn’t strong enough to make you not mind being killed by a lion.
I think these core properties are conserved across cultures. Grab two people from extremely different cultures and they can agree that people eventually die, and that if you die your ability to influence the world is sharply diminished. (Even people who believe in ghosts have to begrudgingly accept that ghosts have a much harder time filing their taxes.) I don’t think this splintering contradicts my theory at all. You’re selecting the concept in each brain that best fits these constraints, and maybe in one brain that concept comes with ghosts and in another it doesn’t.
To be fully clear, I’m not positing some kind of globally universal concept of death that is shared by everyone, or that concepts in brains are stored at fixed “neural addresses”. The entire point of doing ELK/ontology identification is to pick out the thing that best corresponds to some particular concept across a wide variety of different minds. This also allows for splintering outside the region where the concept is well defined.
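To make this concrete, here’s a minimal toy sketch of the selection story, assuming a crude dictionary world-state and hand-written candidate concepts (all invented for illustration; this is not an actual ELK procedure): score each candidate concept in a mind against the core extensional constraints and take the best fit. Ghost baggage only matters if it changes answers on the core cases.

```python
# Toy illustration of picking out, from a mind's candidate concepts, the one
# that best fits a set of core extensional constraints. Everything here
# (the Situation encoding, the candidates, the constraints) is hypothetical,
# purely to make the selection story concrete; it is not an ELK algorithm.
from typing import Callable

Situation = dict  # crude world-state, e.g. {"body_dead": True}
Concept = Callable[[Situation], bool]

def best_fit(candidates: list[Concept],
             constraints: list[tuple[Situation, bool]]) -> Concept:
    """Return the candidate agreeing with the most (situation, label) pairs."""
    return max(candidates,
               key=lambda c: sum(c(s) == label for s, label in constraints))

# Two minds' death concepts; one carries ghost baggage that only shows up
# outside the region the constraints cover.
plain_death = lambda s: s.get("body_dead", False)
ghost_death = lambda s: s.get("body_dead", False) and not s.get("lingers_as_ghost", False)

# Core properties from the ancestral environment: dead things count as dead,
# and being dead means disempowerment.
constraints = [
    ({"body_dead": True}, True),
    ({"body_dead": False}, False),
]

# Both candidates satisfy every core constraint, so either is an acceptable
# referent for "death" in its own mind; they diverge only on situations
# (like ghost-hood) outside the region where the concept is well defined.
chosen = best_fit([plain_death, ghost_death], constraints)
assert all(chosen(s) == label for s, label in constraints)
```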
I concede that fear of death could be downstream of other fears rather than directly encoded. However, I still think it’s wrong to conclude that direct encoding is impossible in principle, and these other fears/motivations (wanting to achieve your values, fear of …, etc.) are still pretty abstract, so there’s a good chance some of them are anchored directly into the genome by a mechanism similar to the one I described.
I don’t see how the case of morality in blind people relates. Sure, it could affect the distribution somewhat. That still shouldn’t break extensional specification. I’m worried that maybe your model of my beliefs looks like the genome encoding some kind of fixed neural address, or a perfectly death-shaped hole that accepts only concepts exactly fitting the mold of a Standardized Death Concept and breaks whenever it’s handed a slightly misshapen one. That’s not at all what I’m pointing at.
I feel similarly about the quantum physics and neuroscience cases. My theory doesn’t predict that your morality collapses when you learn about quantum physics! Your morality is defined by extensional specification (possibly indirectly; the genome probably doesn’t directly encode many examples of what’s right and wrong), and within any new ontology you use that extensional specification to figure out which things are moral. Sometimes this is smooth, as when you make small, localized changes to your ontology. Sometimes you will experience an ontological crisis: empirically, many people seem to go through some kind of moral crisis when concepts like free will get called into question by quantum mechanics, for instance. When that happens, you inspect lots of examples of things you’re confident about and try to find something in the new ontology that stretches to cover all of those cases, which is exactly extensional reasoning. None of this contradicts the idea that morality, or rather its many constituent heuristics built on high-level abstractions, can be defined extensionally in the genome.
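A companion sketch of that crisis-resolution step, with the same caveats (the example labels and candidate predicates are all invented for illustration): keep the examples you’re confident about, then search the new ontology for the predicate that stretches over the most of them.

```python
# Toy sketch of resolving an ontological crisis by extensional re-fitting.
# The examples, labels, and candidate predicates are all hypothetical.

# Judgments you stay confident about even while the ontology shifts.
examples = [
    ("unprovoked harm", False),   # confidently not-moral
    ("keeping a promise", True),  # confidently moral
    ("helping a stranger", True),
]

# Candidate predicates expressible in the *new* ontology (say, one where
# libertarian free will has been called into question).
new_candidates = {
    "increases wellbeing": lambda act: act != "unprovoked harm",
    "was freely willed": lambda act: False,  # nothing qualifies anymore
}

def refit(candidates, examples):
    """Pick the new-ontology predicate covering the most confident examples
    (the extensional-reasoning step described above)."""
    def coverage(pred):
        return sum(pred(act) == label for act, label in examples)
    return max(candidates, key=lambda name: coverage(candidates[name]))

# "increases wellbeing" covers all three examples; "was freely willed"
# covers only one, so the crisis resolves toward the former.
print(refit(new_candidates, examples))  # -> increases wellbeing
```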