One explanation for human alignment is that our values are mostly just a rationalization layer on top of the delicate balance of short-term reward heuristics that correlate well with long-term genetic fitness.
What does this mean?
The reward heuristics are finely tuned by eons of evolution to counterbalance each other well enough to form a semi-stable local optimum in which individuals are somewhat aligned.
What does this mean? My current guess: “Somehow evolution did it with finetuned reward circuitry.” Which seems extremely relevant to alignment.
And how does that extreme finetuning work, given that human brain size changed reasonably quickly on an evolutionary timescale (~2 million years), and changing brain size means changing inductive biases?
Not to mention the rapidly changing cultural and social situations induced by smarter and smarter conspecifics.
If evolution had to finetune reward circuitry so hard, how come we still end up reliably binding values to, e.g., caring about our families, even wildly off-distribution (i.e., far outside the ancestral environment)?
Furthermore, note a prior probability penalty on the hypothesis class “There were relatively few ways for alignment to ‘work’ in humans as well as it does today, and the working ways required lots of finetuning.”
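To make that prior penalty concrete, here is a toy sketch (my own illustration, with made-up parameter counts and tolerances, not something from the original argument): if alignment “working” required k independent pieces of reward circuitry to each land within a fraction eps of their viable range by chance, the prior mass on that hypothesis shrinks exponentially in k.

```python
# Toy sketch of the fine-tuning prior penalty (illustrative numbers only).
# Assumption: each of k reward-circuit "knobs" must independently land
# within a fraction eps of its viable range for alignment to work.

def fine_tuning_prior(k: int, eps: float) -> float:
    """Prior probability of hitting a tolerance-eps target on all k knobs by chance."""
    return eps ** k

for k in (1, 5, 20):
    print(f"k={k:>2}: prior ~ {fine_tuning_prior(k, eps=0.1):.1e}")
# k= 1: prior ~ 1.0e-01
# k= 5: prior ~ 1.0e-05
# k=20: prior ~ 1.0e-20
```

The more independent finetuning the story demands, the steeper this penalty gets, and the stronger the evidence it would need to overcome it.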
I don’t see how to make this story work.
Alignment is a useful survival strategy that’s been selected for.
What does this mean?
“Evolution did it” is not an explanation for why the mechanism works; it’s an explanation of how the mechanism might have gotten there.