To fully confront the ontological crisis that we face, we would have to upgrade our world model to be based on actual physics, and simultaneously translate our utility functions so that their domain is the set of possible states of the new model. We currently have little idea how to accomplish this, and instead what we do in practice is, as far as I can tell, keep our ontologies intact and utility functions unchanged, but just add some new heuristics that in certain limited circumstances call out to new physics formulas to better update/extrapolate our models.
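As a toy sketch of that patching pattern (every function name, number, and trigger condition below is made up for illustration and not drawn from the text above), the world model and the utility function stay in the old ontology, and a narrow heuristic defers to a newer formula only in limited circumstances:

```python
def naive_predict(state):
    """Old-ontology update rule (folk physics)."""
    return {"position": state["position"] + state["velocity"],
            "velocity": state["velocity"]}

def new_physics_predict(state):
    """Placeholder for the more accurate formula the heuristic calls out to."""
    damping = 0.9  # imagine a genuinely better update rule here
    return {"position": state["position"] + damping * state["velocity"],
            "velocity": damping * state["velocity"]}

def utility(state):
    """Still defined over old-ontology states; never re-based on the new model."""
    return -abs(state["position"] - 10.0)

def patched_predict(state):
    # The bolted-on heuristic: defer to new physics only in a limited regime,
    # then feed the result straight back to the untouched utility function.
    if abs(state["velocity"]) > 5.0:   # arbitrary stand-in for "limited circumstances"
        return new_physics_predict(state)
    return naive_predict(state)

state = {"position": 0.0, "velocity": 8.0}
print(utility(patched_predict(state)))   # roughly -2.8, evaluated in the old ontology
```

The point of the sketch is only that `utility` never changes its domain: the new physics is consulted for prediction, but its output is squeezed back into old-ontology states before being valued.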
Here’s a hypothesis (warning for armchair evpsych)...
Define “preferences” to refer broadly to a set that includes an individual’s preferences, values, goals, and morals. During an individual’s childhood and early adulthood, their ontology and preferences co-evolve. Evolution seeks to maximize fitness, so the preference acquisition process is biased in such a way that the preferences we pick up maximize our ability to survive and have surviving offspring. For example, if hunting is considered high-status in our tribe and we display a talent for hunting, we’ll probably pick up a preference for being a hunter. Our circle of altruism gets calibrated to cover those who are considered part of our tribe, and so on. This has the ordinary caveat that the preference acquisition process should be expected to be optimized for the EEA, not the modern world.
There is an exploration/exploitation tradeoff here, and the environment in the EEA probably didn’t change that radically, so as time goes by this process slows down and major changes to our preferences become less and less likely. Because our preferences were acquired via a process aiming to maximize our fit to our particular environment, they are intimately tied together with our ontology. As our neurology shifts closer towards the exploitation phase and our preferences become less amenable to change, we become more likely to add new heuristics to our utility functions rather than to properly revise them when our ontology changes.
This is part of the reason for generational conflict, because as children are raised in a different environment and taught a different view of their world, their preferences become grounded in a new kind of ontology that’s different from the one the preferences of their parents came from. It also suggests that the preferences of any humans still alive today might to some extent be simply impossible to reconcile with those of a sufficiently advanced future—though the children (or robots) born and raised within that future will have no such problem. Which, of course, is just as it has always been.
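To make the exploration/exploitation framing concrete, here is a loose analogy only (not part of the hypothesis itself, and every name and number below is illustrative): preference acquisition as a bandit whose exploration rate decays over a lifetime, so candidate preferences are sampled widely early on and mostly exploited later.

```python
import random

def acquire_preferences(payoffs, lifetime=1000, decay=0.995, seed=0):
    rng = random.Random(seed)
    estimates = {p: 0.0 for p in payoffs}   # learned value of each candidate preference
    counts = {p: 0 for p in payoffs}
    epsilon = 1.0                            # high plasticity in "childhood"
    for _ in range(lifetime):
        if rng.random() < epsilon:
            choice = rng.choice(list(payoffs))          # explore: try something new
        else:
            choice = max(estimates, key=estimates.get)  # exploit: stick with what fits
        reward = payoffs[choice] + rng.gauss(0, 0.1)    # noisy fitness signal
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
        epsilon *= decay                     # plasticity falls; major revisions get rare
    return max(estimates, key=estimates.get)

# e.g. a tribe where hunting happens to pay off:
print(acquire_preferences({"hunter": 1.0, "gatherer": 0.8, "shaman": 0.6}))
```

The decaying epsilon is the only feature the analogy leans on: in a roughly stable environment like the EEA, late-life exploration buys little, so falling plasticity is just the exploit-heavy end of the tradeoff.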
I feel like the part about altruism doesn’t match my observations very well. First, on a theoretical level, exploration seems nearly costless here: it merely consists of retaining some flexibility and doesn’t inhibit exploitation in any practical sense, so I’m not sure there’s any strong advantage to stopping it (although there may also not have been much of an advantage to retaining it before modern times). More concretely, we seem to have empirical evidence to test this hypothesis against, since many people in the modern world switch “tribes” by moving long distances, switching jobs, or significantly altering their social standing.
From what I’ve seen, when such switches occur, many of the people in the old circle of altruism are promptly forgotten (except for those with whom particularly strong reputations have been built up), and a new circle forms to encompass the relevant people in the new community. The case where a person moves to a different culture is admittedly different: there, while the circle of altruism may partially shift, people from the original culture are still strongly favored (even ones the person did not know before).
(The non-altruism parts seem likely enough, though. At the risk of really badly abusing evpsych, we might theorize that people sometimes moved to nearby tribes, which had similar cultures, but almost never to distant tribes, which did not.)
Yes, that sounds plausible.