I don’t think that defining things “extensionally” in this manner works for any even moderately abstract concept. Human concepts are far too varied for that. E.g., different cultures can have very different notions of death. I also think the evidence from children points in the other direction: children often have to be told that death is bad, that it’s not just a long sleep, and that the dead person or entity hasn’t simply gone away somewhere far off. If aversion to death were hard-coded, we’d expect children to quickly acquire an aversion to death as soon as they discovered the concept.
I also think you can fully explain the convergent aversion to death simply by the fact that death is obviously bad relative to your other values. E.g., I’d be quite averse to having my arm turn into a balloon animal, but not because that aversion was evolutionarily hard-coded into me. I can just roll out the consequences of that change and see that they’re bad.
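To make the “roll out the consequences” point concrete, here is a minimal toy sketch (the world model, states, and values are all invented for illustration): the aversion falls out of scoring simulated consequences against pre-existing preferences, with no hard-coded penalty for the change itself.

```python
# Toy sketch (everything here is invented for illustration): a hypothetical
# change is judged by rolling out its consequences and scoring them with
# values that say nothing about the change itself.

def consequences(change):
    """Crude stand-in for a world model: what follows from a change."""
    if change == "arm becomes balloon animal":
        return ["can't use arm", "can't write", "constant squeaking"]
    return []

VALUES = {"can't use arm": -10, "can't write": -5, "constant squeaking": -2}

def evaluate(change):
    # Sum the value of each rolled-out consequence.
    return sum(VALUES.get(c, 0) for c in consequences(change))

print(evaluate("arm becomes balloon animal"))  # -17: bad, with no dedicated balloon-arm aversion
```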
I’d also note that human abstractions vary quite a lot, but having different abstractions doesn’t seem to particularly affect humans’ levels of morality / caring about each other. E.g., blind people don’t have any visual abstractions, but are not thereby morally deficient in any way. Note that blindness means the entire visual cortex is no longer dedicated to vision and can be repurposed for other tasks. This “additional hardware” seems like it should somewhat affect which distribution of abstractions is optimal (since the constraints on the non-visual tasks have changed). And yet, values seem quite unaffected by that.
Similarly, learning about quantum physics, evolution, neuroscience, and the like doesn’t then cause your morality to collapse. In fact, the abstractions most likely to affect a human’s morality, such as religion and political ideology, do not seem very predictively performant.
The fact that different cultures have different concepts of death, or that the concept splinters away from the things it was needed for in the ancestral environment, doesn’t seem to contradict my claim. What matters is not that the ideas are exactly the same from person to person, but that the concept has the kinds of essential properties that mattered in the ancestral environment. For instance, as long as the concept of death you pick out can predict that killing a lion makes it no longer able to kill you, that dying means disempowerment, etc., it doesn’t matter if you also believe ghosts exist, as long as your ghost belief isn’t so strong that it makes you not mind being killed by a lion.
I think these core properties are conserved across cultures. Grab two people from extremely different cultures and they can agree that people eventually die, and that if you die your ability to influence the world is sharply diminished. (Even people who believe in ghosts have to begrudgingly accept that ghosts have a much harder time filing their taxes.) I don’t think this splintering contradicts my theory at all. You’re selecting out the concept in the brain that best fits these constraints, and maybe in one brain that comes with ghosts and in another it doesn’t.
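To illustrate what “selecting out the concept that best fits these constraints” could look like mechanically, here is a toy sketch (the predicates and candidate concepts are invented for the example): each mind’s candidate concepts are scored by how many core predicates they satisfy, and extra baggage like ghost beliefs doesn’t change which candidate wins.

```python
# Toy sketch: pick, from a mind's candidate concepts, the one that best
# satisfies a small set of core predicates. Candidates and predicates are
# invented for illustration.

CORE_PREDICATES = {
    "killing a lion stops it killing you": True,
    "the dead have sharply reduced influence": True,
}

# Each candidate concept is described by which predicates it endorses,
# plus possible extra baggage (e.g. ghosts) that the selection ignores.
candidates_mind_a = {
    "death (with ghosts)": {"killing a lion stops it killing you": True,
                            "the dead have sharply reduced influence": True,
                            "ghosts exist": True},
    "long sleep":          {"killing a lion stops it killing you": True,
                            "the dead have sharply reduced influence": False},
}

def best_fit(candidates):
    def score(beliefs):
        # Count how many core predicates this candidate gets right.
        return sum(beliefs.get(p) == v for p, v in CORE_PREDICATES.items())
    return max(candidates, key=lambda name: score(candidates[name]))

print(best_fit(candidates_mind_a))  # "death (with ghosts)": ghost baggage doesn't hurt the fit
```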
To be fully clear, I’m not positing the existence of some kind of globally universal concept of death or whatever that is shared by everyone, or that concepts in brains are stored at fixed “neural addresses”. The entire point of doing ELK/ontology identification is to pick out the thing that best corresponds to some particular concept in a wide variety of different minds. This also allows for splintering outside the region where the concept is well defined.
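As a rough numerical sketch of the ontology-identification idea, under an assumed linear-probe framing that isn’t part of my claim: the same set of extensional examples picks out a corresponding direction in two differently-structured “minds”, with no fixed neural address involved.

```python
# Minimal sketch (assumes a linear-probe framing): the same extensional
# examples pick out a concept direction in two differently-structured minds.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)             # shared extensional labels ("dead" / "alive")
latent = labels[:, None] + 0.1 * rng.normal(size=(200, 1))

# Two minds embed the same underlying situations in different bases/dimensions.
mind_a = latent @ rng.normal(size=(1, 8)) + 0.05 * rng.normal(size=(200, 8))
mind_b = latent @ rng.normal(size=(1, 12)) + 0.05 * rng.normal(size=(200, 12))

def fit_concept(acts, y):
    # Least-squares probe: the direction in this mind that best matches the examples.
    w, *_ = np.linalg.lstsq(acts, y, rcond=None)
    return w

for name, acts in [("mind A", mind_a), ("mind B", mind_b)]:
    w = fit_concept(acts, labels.astype(float))
    acc = ((acts @ w > 0.5) == labels).mean()
    print(name, "probe accuracy:", round(acc, 3))
```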
I concede that fear of death could be downstream of other fears rather than directly encoded. However, I still think it’s wrong to conclude that such encoding isn’t possible in principle, and these other fears/motivations (wanting to achieve values, fear of , etc.) are still pretty abstract, so there’s a good chance some of them are anchored directly into the genome via a mechanism similar to the one I described.
I don’t see how the case of blind people having normal morality relates. Sure, blindness could affect the distribution of abstractions somewhat. That still shouldn’t break extensional specification. I’m worried that maybe your model of my beliefs looks like the genome encoding some kind of fixed neural address, or a perfectly death-shaped hole that accepts only concepts exactly fitting the mold of a Standardized Death Concept and breaks whenever given a slightly misshapen death concept. That’s not at all what I’m pointing at.
I feel similarly about the quantum physics or neuroscience cases. My theory doesn’t predict that your morality collapses when you learn about quantum physics! Your morality is defined by extensional specification (possibly indirectly; the genome probably doesn’t directly encode many examples of what’s right and wrong), and within any new ontology you use your extensional specification to figure out which things are moral. Sometimes this is smooth, when you make small, localized changes to your ontology. Sometimes you experience an ontological crisis: empirically, many people seem to go through some crisis of morality when, say, quantum mechanics calls a concept like free will into question. In that case you inspect lots of examples you’re confident about and try to find something in the new ontology that stretches to cover all of those cases (which is exactly extensional reasoning). None of this contradicts the idea that morality, or rather its many constituent heuristics built on high-level abstractions, can be defined extensionally in the genome.
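As a toy sketch of that re-fitting step during an ontology shift (the features, labels, and dimensions are invented for illustration): the remembered confident examples stay fixed, and after the ontology changes you fit whatever predictor in the new ontology stretches to cover them.

```python
# Toy sketch: an extensional specification (remembered labeled examples) survives
# an ontology change by being re-fit in the new ontology's features.
# Features, labels, and dimensions are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 300
old_features = rng.normal(size=(n, 4))                   # old ontology
labels = (old_features[:, 0] + old_features[:, 1] > 0)   # confident moral judgments

# Ontology change: a new, differently-parameterized description of the same cases.
mixing = rng.normal(size=(4, 6))
new_features = old_features @ mixing

def refit(features, y):
    # Find the thing in the new ontology that covers the old confident examples.
    w, *_ = np.linalg.lstsq(features, y.astype(float), rcond=None)
    return w

w_new = refit(new_features, labels)
agreement = ((new_features @ w_new > 0.5) == labels).mean()
print("new-ontology concept matches old judgments on", round(agreement, 3), "of cases")
```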