I suspect that the underlying mechanism of how humans can be aligned isn’t something that’s particularly useful when applied to AI. One explanation for human alignment is that our values are mostly just a rationalization layer on top of the delicate balance of short term reward heuristics that correlate well with long term genetic fitness.
Maybe the real secret sauce isn’t any specific mechanism but rather the fact that the reward heuristics are finely tuned by eons of evolution to counterbalance each other well enough to form a semi-stable local optimum where individuals are somewhat aligned. The mechanism for alignment is well-understood, hardwired neural reward circuitry, but arranged in a convoluted system that happens to result in aligned behavior, since alignment is a useful survival strategy that’s been selected for.
A rational AI will likely only be aligned with humans if alignment is strategically useful for it, if alignment is an intrinsic value for it, or if alignment is a consequence of other intrinsic values. The first is only true so long as the humans are more useful to the AI than not, which isn’t likely to last very long. The second is something we’d like to be true for an AI, but since alignment isn’t really an intrinsic value for humans (too abstract to have its own reward circuitry), using them as a model probably isn’t especially enlightening. The last is where the most hope lies, but given that an AI will have wildly different sets of reward functions compared to human reward circuits, and likely the ability to modify them (unlike our largely hardwired shards), even fully understanding exactly how human alignment works at every level might not help align AI much at all. It’s a bit like studying the trendiest Twitter users or highest-karma Redditors hoping that understanding them will be useful for designing a curriculum for child psychiatrists. (The domain overlap is rather limited.)
“Might not help” is insufficient reason to abandon study, though. The core idea of looking at what we already know exists is indeed a powerful heuristic for narrowing down the hypothesis space, especially if we keep it broad enough to study human alignment with agents of vastly differing power levels (e.g. human-and-foreign-government or human-and-cockroaches) where goals are not well-aligned.
One explanation for human alignment is that our values are mostly just a rationalization layer on top of the delicate balance of short term reward heuristics that correlate well with long term genetic fitness.
What does this mean?
the reward heuristics are finely tuned by eons of evolution to counterbalance each other well enough to form a semi-stable local optimum where individuals are somewhat aligned.
What does this mean? My current guess: “Somehow evolution did it with finetuned reward circuitry.” That seems extremely relevant to alignment.
And how does that extreme finetuning work, given that human brain size changed reasonably quickly on an evolutionary timescale (~2 million years), and changing brain size → changing inductive biases?
Not to mention the rapidly changing cultural and social situations induced by smarter and smarter conspecifics.
If evolution had to finetune reward circuitry so hard, how come we still end up reliably binding values to e.g. caring about our families, even wildly off-distribution (i.e. far from the ancestral distribution)?
Furthermore, note a prior probability penalty on the hypothesis class “There were relatively few ways for alignment to ‘work’ in humans as well as it does today, and the working ways required lots of finetuning.”
I don’t see how to make this story work.
alignment is a useful survival strategy that’s been selected for.
“Evolution did it” is not an explanation for the mechanism working; it’s an explanation of how the mechanism might have gotten there.