Really appreciated this post, and I’m especially excited for post 13 now! In the past month or two, I’ve been thinking about stuff like “I crave chocolate” and “I should abstain from eating chocolate” as the result of two independent value systems (one whose policy was shaped by evolutionary pressure, and one whose policy is… idk, vaguely “higher order” stuff where you’ll endure elevated cortisol to contribute to society or something).
I’m starting to lean away from this a little bit, and I think reading this post gave me a good idea of what your thoughts are, but it’d be really nice to get confirmation (and maybe clarification). Let me know if I should just wait for post 13. My prediction is that you believe there is a single (not dual) generator of human values, essentially mediated at the neurochemical level (“level of dopamine/serotonin/cortisol”). And yet this same generator, thanks to our sufficiently complex “thought generator”, can produce plans and thoughts such as “I should abstain from eating chocolate” even though eating it would be a short-term dopamine hit, because it can simulate much further forward down the timeline and expects that the overall neurochemical feedback, on that longer horizon, will be better than if it caved in and ate the chocolate. Is this correct?
If so, do you believe that, because social/multi-agent navigation was essential to human evolution, the policy was heavily shaped by social-world pressures, such that even when you abstain from the chocolate, or endure pain and suffering for a “heroic” act, this can all still be attributed to the same system/generator that also sometimes has you eat sugary but unhealthy foods?
Given that my angle for contributing to AI Alignment is trying to better elucidate what “human values” even are, I feel like I should resolve the competing ideas I’ve absorbed from LessWrong: two distinct value systems vs. a single generator of values. This post was a big step for me in understanding how the latter idea can be coherent despite the apparent contradictions between hedonistic and higher-level values.
Thanks!
Right, I think there’s one reward function (well, one reward function that’s relevant for this discussion), and that for every thought we think, we’re thinking it because it’s rewarding to do so—or at least, more rewarding than alternative thoughts. Sometimes a thought is rewarding because it involves feeling good now, sometimes it’s rewarding because it involves an expectation of feeling good in the distant future, sometimes it’s rewarding because it involves an expectation that it will make your beloved friend feel good, sometimes it’s rewarding because it involves an expectation that it will make your admired in-group members very impressed with you, etc.
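To make that concrete, here’s a toy Python sketch (the feature names and weights are invented purely for illustration, not a claim about how the brain actually does the scoring): one reward function evaluates whole thoughts, and different thoughts can win for entirely different reasons.

```python
# Toy sketch: a single reward function scores whole thoughts; different
# thoughts can be rewarding for entirely different reasons.
# (Feature names and weights are made up for illustration.)

def reward(thought_features):
    """One scalar reward for a whole thought."""
    weights = {
        "feels_good_now": 1.0,
        "expected_future_good_feeling": 0.8,
        "expected_friend_wellbeing": 0.7,
        "expected_ingroup_admiration": 0.9,
    }
    return sum(weights[k] * thought_features.get(k, 0.0) for k in weights)

candidate_thoughts = {
    "eat the chocolate": {"feels_good_now": 1.0},
    "skip it, stick to the plan": {"expected_future_good_feeling": 1.5},
    "bring some for my friend": {"expected_friend_wellbeing": 1.2},
}

# The thought that gets "thought" is whichever one scores highest.
winner = max(candidate_thoughts, key=lambda name: reward(candidate_thoughts[name]))
print(winner)  # "skip it, stick to the plan" with these made-up numbers
```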
I think that the thing that gets rewarded is thoughts / plans, not just actions / states. So we don’t have to assume that the Thought Generator is proposing an action that’s unrewarding now (going to the gym) in order to get into a more-rewarding state later on (being ripped). Instead, the Thought Generator can generate one thought right now, “I’m gonna go to the gym so that I can get ripped”. That one thought can be rewarding right now, because the “…so that I can get ripped” is right there in the thought, providing evidence to the brainstem that the thought should be rewarded, and that evidence can plausibly outweigh the countervailing evidence from the “I’m gonna go to the gym…” part of the thought.
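Here’s the same point as a toy calculation (numbers invented purely for illustration): the composite thought is scored as a whole, right now, so the valued outcome baked into the thought can outweigh the unappealing action it contains.

```python
# Toy illustration: the composite thought is scored as a whole, right now,
# so the "...so that I can get ripped" part can outweigh the
# "I'm gonna go to the gym..." part within the very same thought.

def thought_value(component_evidence):
    """Net evidence the whole thought sends to the evaluator."""
    return sum(component_evidence.values())

composite_thought = {
    "I'm gonna go to the gym...": -0.4,   # mildly aversive in itself
    "...so that I can get ripped": +1.0,  # valued outcome, part of the same thought
}

print(thought_value(composite_thought))  # +0.6: net rewarding *now*, no delayed reward needed
```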
I do think there’s still an adjustable parameter in the brain related to time-discounting, even if the details are kinda different than in normal RL. But I don’t see a strong connection between that and social instincts. For example, if you abstain from ice cream to avoid a stomach ache, that’s a time-discounting thing, but it’s not a social-instincts thing. It’s possible that social animals in general are genetically wired to time-discount less than non-social animals, but I don’t have any particular reason to expect that to be the case. Or, maybe humans in particular are genetically wired to time-discount less than other animals, I don’t know, but if that’s true, I still wouldn’t expect that it has to do with humans being social; rather I would assume that it evolved because humans are smarter, and therefore human plans are unusually likely to work out as predicted, compared to other animals.
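For reference, the “normal RL” notion of time-discounting I’m contrasting with is just a single adjustable parameter gamma that exponentially down-weights predicted future reward; this is the textbook formula only, not a claim about the brain’s version.

```python
# "Normal RL" reference point: time-discounting as one adjustable parameter
# gamma that exponentially down-weights predicted future reward.
# (Textbook formula only; the brain's version differs in its details.)

def discounted_return(predicted_rewards, gamma=0.95):
    """Sum of gamma**t * r_t over a predicted reward sequence."""
    return sum((gamma ** t) * r for t, r in enumerate(predicted_rewards))

# "Eat the ice cream": pleasure now, stomach ache two steps later.
eat = discounted_return([+1.0, 0.0, -2.0])    # about -0.81
# "Abstain": nothing much happens either way.
abstain = discounted_return([0.0, 0.0, 0.0])  # 0.0

print(eat < abstain)  # True with gamma=0.95; a heavy discounter (tiny gamma) would flip it
```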
I think social instincts come from having things in the innate reward function that track “having high status in my in-group” and “being treated fairly” and “getting revenge” and so on. (To a first approximation.) Post #13(ish) will be a (hopefully) improved and updated version of this discussion of how such things might actually get incorporated into the reward function, given the difficulties related to symbol-grounding. You might also be interested in my post (Brainstem, Neocortex) ≠ (Base Motivations, Honorable Motivations).
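A hand-wavy sketch of what “things in the innate reward function” could look like (placeholder feature names and weights, and it completely glosses over the symbol-grounding question of how the brainstem could compute these quantities in the first place):

```python
# Hand-wavy sketch: social instincts as extra terms in the innate reward
# function. Feature names and weights are placeholders; how the brainstem
# could actually compute these features is the hard (symbol-grounding) part.

def innate_social_reward(features):
    return (
        1.0 * features.get("expected_ingroup_status_gain", 0.0)
        + 0.8 * features.get("being_treated_fairly", 0.0)
        + 0.6 * features.get("revenge_on_a_defector", 0.0)
    )

print(innate_social_reward({"expected_ingroup_status_gain": 1.0}))  # 1.0
```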
Hope this helps, happy to talk more, either here or by phone if you think that would help. :)