In my post on value systematization I used utilitarianism as a central example.
Value systematization is important because it’s a process by which a small number of goals end up shaping a huge amount of behavior. But there’s another, quite different way in which this happens: core emotional motivations formed during childhood (e.g. fear of death) often drive a huge amount of our behavior, in ways that are hard for us to notice.
Fear of death and utilitarianism are very different. The former is very visceral and deep-rooted; it typically influences our behavior via subtle channels that we don’t even consciously notice (because we suppress a lot of our fears). The latter is very abstract and cerebral, and it typically influences our behavior by allowing us to explicitly reason about which strategies to adopt.
But fear of death does seem like a kind of value systematization. Before we have a concept of death, we experience a bunch of stuff which is scary for reasons we don’t understand. Then we learn about death, and it seems like we systematize a lot of that scariness into “it’s bad because you might die”.
But it seems like this is happening way less consciously than systematization to become a utilitarian. So maybe we need to think about systematization happening separately in system 1 and system 2? Or maybe we should think about it as systematization happening repeatedly in “layers” over time, where earlier layers persist but are harder to access later on.
I feel pretty confused about this. But for now my mental model of the mind is two (partially overlapping) inverted pyramids: one bottoming out in a handful of visceral motivations like “fear of death” and “avoid pain” and “find love”, and the other bottoming out in a handful of philosophical motivations like “be a good Christian” or “save the planet” or “make America great again” or “maximize utility”. The second (system 2) pyramid is trying to systematize the parts of system 1 that it can observe, but it can’t actually observe the deepest parts (or, when it does, it tries to oppose them), which creates conflict between the two systems.
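As a toy sketch of that picture (my own illustration; all the names and structure here are made up for the example, not anything from the post): system 2 can only systematize the system-1 leaves it can introspect on, so a suppressed motivation like fear of death never makes it into the second pyramid.

```python
from dataclasses import dataclass, field

@dataclass
class Motivation:
    name: str
    observable: bool = True  # can system 2 introspect on this motivation?
    children: list["Motivation"] = field(default_factory=list)

# System-1 pyramid: a few visceral roots (names are illustrative only).
system1 = Motivation("visceral core", children=[
    Motivation("fear of death", observable=False),  # suppressed, hard to see
    Motivation("avoid pain"),
    Motivation("find love"),
])

def observable_leaves(m: Motivation) -> list[str]:
    """Collect the raw material system 2 gets to systematize."""
    if not m.children:
        return [m.name] if m.observable else []
    return [leaf for c in m.children for leaf in observable_leaves(c)]

# System 2 builds its own pyramid over only these leaves; "fear of death"
# never enters it, so the two hierarchies can pull in different directions.
print(observable_leaves(system1))  # -> ['avoid pain', 'find love']
```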
So maybe we need to think about systematization happening separately in system 1 and system 2?
I think that’s right. Taking on the natural-abstraction lens, there is a “ground truth” to the “hierarchy of values”. That ground truth can be uncovered either by “manual”/symbolic/System-2 reasoning, or by “automatic”/gradient-descent-like/System-1 updates, and both processes would converge to the same hierarchy. But in the System-2 case, the hierarchy would be clearly visible to the conscious mind, whereas the System-1 route would make it visible only indirectly, by the impulses you feel.
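A minimal analogy for the convergence claim (again just my own sketch, with an arbitrary quadratic loss standing in for the “ground truth”): an explicit closed-form solve and blind gradient-descent-like updates recover the same answer; only how visible the process is differs.

```python
import numpy as np

# Arbitrary toy loss whose minimizer stands in for the "ground-truth
# hierarchy"; the data here is random and purely illustrative.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))
b = rng.normal(size=10)

# "System 2" route: explicit symbolic reasoning (closed-form least squares).
w_explicit, *_ = np.linalg.lstsq(A, b, rcond=None)

# "System 1" route: blind gradient-descent-like updates on the same loss.
w = np.zeros(3)
for _ in range(5000):
    w -= 0.01 * A.T @ (A @ w - b)  # gradient of 0.5 * ||A @ w - b||^2

# Both routes converge to the same answer; only the process differs.
print(np.allclose(w, w_explicit, atol=1e-6))  # True
```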
I don’t know about the conflict thing, though. Why do you think System 2 would necessarily oppose System 1’s deepest motivations?
But fear of death does seem like a kind of value systematization
I don’t think it’s system 1 doing the systematization. Evolution beat fear of death into us in lots of independent forms (fear of heights, snakes, thirst, suffocation, etc.), but for the same underlying reason. Fear of death is not just an abstraction humans invented or acquired in childhood; it’s a “natural idea” pointed at by our brain’s innate circuitry from many directions. Utilitarianism doesn’t come with that scaffolding. We don’t learn to systematize Euclidean and Minkowskian spaces the same way either.
Reminds me of Maslow’s pyramid.
I wrote an article about values, arguing that the supreme value is life and that every other value derives from it.
Fair warning: at first glance this probably doesn’t align with your view:
https://www.lesswrong.com/posts/xx3St4KC3KHHPGfL9/human-alignment