Then, if one runs into an ontological crisis, one can in principle re-generate their ontology by figuring out how to reason in terms of the new ontology in order to best fulfill their values.
I’ve found myself confused by how the process at the end of this sentence works. It seems like there’s some abstract “will this worldview lead to value fulfillment?” question being asked, even though the core values seem undefined during an ontological crisis! I agree that you can regenerate the ontology once you have the core values redefined.
I don’t think that the real core values are affected during most ontological crises. I suspect that the real core values are things like feeling loved vs. despised, safe vs. threatened, competent vs. useless, etc. Crucially, what is optimized for is a feeling, not an external state.
Of course, the subsystems which compute how we feel along those axes need to take external data as input. I don’t have a very good model of how exactly they work, but I’m guessing that their internal models have to be kept relatively encapsulated from a lot of other knowledge, since it would be dangerous if it were easy to rationalize yourself into believing that you were, e.g., loved when everyone was actually planning to kill you. My guess is that the computation of the feelings bootstraps from simple features in your sensory experience, such as an infant being innately driven to make their caregivers smile, with that simple smile-detector then developing into an increasingly sophisticated model of what “being loved” means.
But I suspect that even the more developed versions of the pattern detectors are ultimately looking for patterns in your direct sensory data, such as detecting when a romantic partner does something that you’ve learned to associate with being loved.
It’s those patterns which cause particular subsystems to compute things like the feeling of being loved, and it’s those feelings that other subsystems treat as the core values to optimize for. Ontologies are generated so as to help you predict how to get more of those feelings, and most ontological crises don’t have an effect on how the feelings are computed from the patterns, so most ontological crises don’t actually change your real core values. (One exception being if you manage to look at the functioning of your mind closely enough to directly challenge the implicit assumptions that the various subsystems are operating on. That can get nasty for a while.)
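To make the separation I have in mind a bit more concrete, here’s a minimal toy sketch. It’s purely my own illustration, not a claim about how the mind actually implements any of this, and all the names in it (FeelingDetector, Ontology, choose_action) are made up for the example:

```python
# Toy sketch: feelings are computed from raw sensory patterns by an
# encapsulated detector, other subsystems treat that feeling as the thing
# to optimize, and the ontology is just a swappable predictive model on top.

from typing import Callable, List


class FeelingDetector:
    """Encapsulated subsystem: maps raw sensory features to a feeling signal.

    It starts from a simple innate pattern (e.g. "smile") and accumulates
    learned associations, but it never consults the ontology directly.
    """

    def __init__(self, innate_patterns: List[str]):
        self.patterns = set(innate_patterns)

    def learn_association(self, pattern: str) -> None:
        # e.g. "partner makes me coffee" gets associated with being loved
        self.patterns.add(pattern)

    def feeling(self, sensory_features: List[str]) -> float:
        # Crude intensity: fraction of known patterns present right now.
        hits = sum(1 for p in self.patterns if p in sensory_features)
        return hits / max(len(self.patterns), 1)


class Ontology:
    """Swappable world-model: predicts which sensory features an action
    leads to. An ontological crisis replaces this object, nothing else."""

    def __init__(self, predict: Callable[[str], List[str]]):
        self.predict = predict  # action -> predicted sensory features


def choose_action(actions: List[str], ontology: Ontology,
                  detector: FeelingDetector) -> str:
    # Other subsystems optimize the *feeling*, using the ontology only to
    # predict which action produces feeling-triggering patterns.
    return max(actions, key=lambda a: detector.feeling(ontology.predict(a)))


# Swapping ontologies changes the predictions, but the detector -- the
# actual core value -- is untouched:
loved = FeelingDetector(innate_patterns=["smile"])
loved.learn_association("partner makes me coffee")

old_ontology = Ontology(lambda a: ["smile"] if a == "visit family" else [])
new_ontology = Ontology(lambda a: ["partner makes me coffee"] if a == "stay home" else [])

print(choose_action(["visit family", "stay home"], old_ontology, loved))  # visit family
print(choose_action(["visit family", "stay home"], new_ontology, loved))  # stay home
```

The only point of the sketch is that regenerating the Ontology object is a different operation from modifying the FeelingDetector; only the exception I mentioned, introspecting closely enough to challenge the detector’s own implicit assumptions, would touch the latter.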
I really like this line of thinking.
Thanks! I’ve really liked yours, too.