If you buy the RLHF Conditioning Hypothesis, then selecting goals from learned knowledge is what RL does too.
Or if you buy a shard-theory-esque picture of RL locking in heuristics, then which heuristics can get locked in depends on what’s “natural” to learn first, even when training from scratch.
Both of these hypotheses should probably come with caveats, though (about expected reliability, training time, model-free-ness, etc.).
I think you’re right. I’d even say that RLHF is selecting goals from learned knowledge if the RLHF conditioning hypothesis is false; they’d just be poorly specified and potentially dangerous goals (“do things people like in these contexts”).
This calls my choice of name into question; perhaps I should have called it explicit goal selection, or something similar, to contrast it with the goals implied by a set of RLHF training data.
But I didn’t consider this carefully, because RLHF is not an important part of either my risk model or my proposed path to successful alignment. It’s somewhat helpful to have an “aligned” LLM as the core engine of your language model agent, but such an agent will pursue explicitly defined goals (as it (mis)understands them), not just do whatever the LLM spits out as ideas.
I see language model agents as a far more likely risk model than mesa-optimization emerging from an LLM. On my model, we’ll actively build AGI language model agents before dangerous things emerge from LLM training. I have some ideas about how easy it will be to make agents out of slightly better LLMs and memory systems, and about how to align them with a stack of techniques including RLHF or other fine-tuning, but centered on well-specified goals that are carefully checked, with humans loosely in the loop.
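To make that contrast concrete, here is a minimal, hypothetical sketch (not anything from the post or this thread) of an agent loop where the goal is stated explicitly and checked by a person, rather than left implicit in whatever the LLM suggests. `call_llm` and `human_approves` are placeholder names standing in for a real model API and a real review process.

```python
# Hypothetical sketch only: illustrates "explicitly defined goals, carefully
# checked, with humans loosely in the loop", as opposed to acting on whatever
# the LLM happens to suggest. All names here are placeholders.

def call_llm(prompt: str) -> str:
    """Stand-in for a real chat/completion API call."""
    return f"(proposed action for: {prompt[:60]}...)"

def human_approves(text: str) -> bool:
    """Loose human oversight: a person reviews a goal or plan before it is used."""
    return input(f"Approve the following?\n{text}\n[y/N] ").strip().lower() == "y"

def run_agent(explicit_goal: str, max_steps: int = 5) -> None:
    # The goal is selected and vetted up front, not inferred from RLHF alone.
    if not human_approves(f"Goal: {explicit_goal}"):
        return
    memory: list[str] = []  # toy episodic memory / scratchpad
    for _ in range(max_steps):
        proposal = call_llm(
            f"Goal: {explicit_goal}\nMemory so far: {memory}\nPropose the next action."
        )
        # Humans stay loosely in the loop: spot-check proposals before acting on them.
        if not human_approves(proposal):
            break
        memory.append(proposal)  # a real agent would execute the action and store the result

if __name__ == "__main__":
    run_agent("Draft a summary of this comment thread")
```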