None of these papers seem to address the question of how the agent is intrinsically motivated to learn external objectives. Either there is a human in the loop, the agent learns from humans (which improves its capability but not its alignment), or RL is applied on top. I’m in favor of keeping the human in the loop, but it doesn’t scale. RL on LLMs is bound to fail, i.e., to be gamed, if the symbols aren’t grounded in something real.
I’m looking for something that explains how the presence of other agents in an agent’s environment, together with reward/feedback grounded in that environment (as in [Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL), leads to aligned behaviors.