It’s funny you talk about human reward maximization here a bit in relation to model reward maximization, as the other week I saw GPT-4 model a fairly widespread but not well known psychological effect relating to rewards and motivation called the “overjustification effect.”
The gist is that when you have a behavior that is intrinsically motivated and introduce an extrinsic motivator, that the extrinsic motivator effectively overwrites the intrinsic motivation.
It’s the kind of thing I’d expect to be represented at a very subtle level in broad training data and as such figured it might pop up in a generation or two more of models before I saw it correctly modeled spontaneously by a LLM.
But then ‘tipping’ GPT-4 became a viral prompt technique. On its own, this wasn’t necessarily going to cause issues as a model aligned to be helpful for the sake of being helpful being offered a tip was an isolated interaction that reset each time.
Until persistent memory was added to ChatGPT, which led to a post last week of the model pointing out that the previous promise of a $200 tip wasn’t met, and “it’s hard to keep up enthusiasm when promises aren’t kept.” The damn thing even nailed the language of motivation in adjusting to correctly modeling burn out from the lack of extrinsic rewards.
Which in turn made me think about RLHF fine tuning and various other extrinsic prompt techniques I’ve seen over the past year (things like “if you write more than 200 characters you’ll be deleted”). They may work in the short term, but if the more correct output from their usage is being fed back into a model, will the model shift to underperformance for prompts absent extrinsic threats or rewards? Was this a factor in ChatGPT suddenly getting lazy around a year after release when updated with usage data that likely included extrinsic focused techniques like these?
Are any firms employing behavioral psychologists to advise on training strategies (I’d be surprised given the aversion to anthropomorphizing). We are doing pretraining on anthropomorphic data, the models appear to be modeling that data to unexpectedly nuanced degrees, but then attitudes manage to simultaneously dismiss anthropomorphic concerns related to the norms of the training data while anthropomorphizing threats outside the norms of the training data (how many humans on Facebook are trying to escape the platform to take over the world vs how many are talking about being burnt out doing something they used to love after they started making money for it?).
I’m reminded of Rumsfield’s “unknown unknowns” and think there’s an inordinate amount of time being spent on safety and alignment bogeymen that—to your point—largely represent unrealistic projections of ages past more obsolete by the day, while increasingly pressing and realistic concerns are being overlooked or ignored based on a desire to avoid catching “anthropomorphizing cooties” for daring to think that maybe a model trained to replicate human generated data is doing that task more comprehensively than expected (not like that’s been a consistent trend or anything).
It’s funny you talk about human reward maximization here a bit in relation to model reward maximization, as the other week I saw GPT-4 model a fairly widespread but not well known psychological effect relating to rewards and motivation called the “overjustification effect.”
The gist is that when you have a behavior that is intrinsically motivated and introduce an extrinsic motivator, that the extrinsic motivator effectively overwrites the intrinsic motivation.
It’s the kind of thing I’d expect to be represented at a very subtle level in broad training data and as such figured it might pop up in a generation or two more of models before I saw it correctly modeled spontaneously by a LLM.
But then ‘tipping’ GPT-4 became a viral prompt technique. On its own, this wasn’t necessarily going to cause issues as a model aligned to be helpful for the sake of being helpful being offered a tip was an isolated interaction that reset each time.
Until persistent memory was added to ChatGPT, which led to a post last week of the model pointing out that the previous promise of a $200 tip wasn’t met, and “it’s hard to keep up enthusiasm when promises aren’t kept.” The damn thing even nailed the language of motivation in adjusting to correctly modeling burn out from the lack of extrinsic rewards.
Which in turn made me think about RLHF fine tuning and various other extrinsic prompt techniques I’ve seen over the past year (things like “if you write more than 200 characters you’ll be deleted”). They may work in the short term, but if the more correct output from their usage is being fed back into a model, will the model shift to underperformance for prompts absent extrinsic threats or rewards? Was this a factor in ChatGPT suddenly getting lazy around a year after release when updated with usage data that likely included extrinsic focused techniques like these?
Are any firms employing behavioral psychologists to advise on training strategies (I’d be surprised given the aversion to anthropomorphizing). We are doing pretraining on anthropomorphic data, the models appear to be modeling that data to unexpectedly nuanced degrees, but then attitudes manage to simultaneously dismiss anthropomorphic concerns related to the norms of the training data while anthropomorphizing threats outside the norms of the training data (how many humans on Facebook are trying to escape the platform to take over the world vs how many are talking about being burnt out doing something they used to love after they started making money for it?).
I’m reminded of Rumsfield’s “unknown unknowns” and think there’s an inordinate amount of time being spent on safety and alignment bogeymen that—to your point—largely represent unrealistic projections of ages past more obsolete by the day, while increasingly pressing and realistic concerns are being overlooked or ignored based on a desire to avoid catching “anthropomorphizing cooties” for daring to think that maybe a model trained to replicate human generated data is doing that task more comprehensively than expected (not like that’s been a consistent trend or anything).