My current position is that this is the wrong question to be asking—instead, I think the right question is just “what is GPT-3's training story?” Then, we can just talk about to what extent the training rationale is enough to convince us that we would get the desired training goal vs. some other model, like a deceptive model, instead—rather than having to worry about what technically counts as the base objective, mesa-objective, etc.
I was wondering if that was the case, haha. Thanks!
This is unfortunate, no? The AI safety community had this whole thing going with mesa-optimization and whatnot… now you propose to abandon the terminology and shift to this new frame? But what about all the people using the old terminology? Is the old terminology unsalvageable?
I do like your new thing and it seems better to me in some ways, but worse in others. I feel like I expect a failure mode where people exploit ambiguity and norm-laden concepts to convince themselves of happy fairy tales. I should think more about this and write a comment.
ETA: Here’s an attempt to salvage the original inner/outer alignment problem framing:
We admit up front that it’s a bit ambiguous what the base objective is, and thus there will be cases where it’s ambiguous whether a mesa-optimizer is aligned to the base objective.
However, we say this isn’t a big deal. We give a handful of examples of “reasonable construals” of the base objective, like I did in the OP, and say that all the classic arguments are arguments for the plausibility of cases where a mesa-optimizer is misaligned with every reasonable construal of the base objective.
Moreover, we make lemonade out of lemons, and point out that the fact that there are multiple reasonable construals is itself reason to think inner alignment problems are serious and severe. I’m imagining an interlocutor who thinks “bah, it hasn’t been established yet that inner-alignment problems are even a thing; it still seems like the default hypothesis is that you get what you train for, i.e. you get an agent that is trying to maximize predictive accuracy or whatever.” And then we say “Oh? What exactly is it trying to maximize? Predictive accuracy full stop? Or predictive accuracy conditional on dataset D? Or is it instead trying to maximize reward, in which case it’d hack its reward channel if it could? Whichever one you think it is, would you not agree that it’s plausible that it might instead end up trying to maximize one of the other ones?”
This is unfortunate, no? The AI safety community had this whole thing going with mesa-optimization and whatnot… now you propose to abandon the terminology and shift to this new frame? But what about all the people using the old terminology? Is the old terminology unsalvageable?
To be clear, that’s definitely not what I’m arguing. I continue to think that the Risks from Learned Optimization terminology is really good, for the specific case that it’s talking about. The problem is just that it’s not general enough to handle all possible ways of training a model using machine learning. Terms like base objective or inner/outer alignment are still great terms for talking about training stories that are trying to train a model to optimize for some specified objective. From “How do we become confident in the safety of a machine learning system?”:
The point of training stories is not to do away with concepts like mesa-optimization, inner alignment, or objective misgeneralization. Rather, the point of training stories is to provide a universal framework in which all of those sorts of concepts can live as discrete subproblems—specific ways in which a training story might go wrong.
I continue to think that the Risks from Learned Optimization terminology is really good, for the specific case that it’s talking about. The problem is just that it’s not general enough to handle all possible ways of training a model using machine learning.
GPT-3 was trained using self-supervised learning, which I would have thought was a pretty standard way of training a model using machine learning. What training scenarios do you think the Risks from Learned Optimization terminology can handle, and what’s the difference between those and the way GPT-3 was trained?
First, the problem is only with outer/inner alignment—the concept of unintended mesa-optimization is still quite relevant and works just fine.
Second, the problems with applying Risks from Learned Optimization terminology to GPT-3 have nothing to do with the training scenario, the fact that you’re doing unsupervised learning, etc.
The place where I think you run into problems is that, for cases where mesa-optimization is intended in GPT-style training setups, inner alignment in the Risks from Learned Optimization sense is usually not the goal. Most of the optimism about large language models rests on the hope that they’ll learn to generalize in particular ways that are better than just learning to optimize for something like cross-entropy/predictive accuracy. Thus, just saying “if the model is an optimizer, it won’t just learn to optimize for cross-entropy/predictive accuracy/whatever else it was trained on,” while true, is unhelpful.
What I like about training stories is that it explicitly asks what sort of model you want to get—rather than assuming that you want something which is optimizing for your training objective—and then asks how likely we are to actually get it (as opposed to some sort of mesa-optimizer, a deceptive model, or anything else).
I feel like I expect a failure mode where people exploit ambiguity and norm-laden concepts to convince themselves of happy fairy tales. I should think more about this and write a comment.
Just wanted to point out that this is already something we need to worry about all the time in alignment. Calling them training stories doesn’t create that failure mode; it makes such failures obvious to people like you and me who are wary of narrative explanations in science.
Yes. I have the intuition that training stories will make this problem worse. But I don’t think my intuition on this matter is trustworthy (what experience do I have to base it on?) so don’t worry about it. We’ll try it and see what happens.
(To explain the intuition a little bit: with inner/outer alignment, any would-be AGI creator will have to face up to the fact that they haven’t solved outer alignment, because it’ll be easy for a philosopher to find differences between the base objective they’ve programmed and True Human Values. With training stories, I expect lots of people to be saying more sophisticated versions of “It just does what I meant it to do, no funny business.”)
Yeah, agreed. It’s true that GPT obeys the objective “minimize the cross-entropy loss between the output and the distribution of continuations in the training data.” But this doesn’t mean it doesn’t also obey objectives like “write coherent text”, to the extent that we can tell a useful story about how the training set induces that behavior.
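For concreteness, here’s a minimal sketch of that per-token cross-entropy objective—the toy vocabulary, probabilities, and function names are all made up for illustration, not taken from any actual GPT implementation:

```python
import math

def cross_entropy(predicted_probs, target_token):
    """Cross-entropy loss for a single next-token prediction:
    the negative log-probability the model assigned to the token
    that actually followed in the training data."""
    return -math.log(predicted_probs[target_token])

# Hypothetical model output: a distribution over a tiny vocabulary.
probs = {"cat": 0.7, "dog": 0.2, "car": 0.1}

# The loss is small when the model put most of its mass on the
# actual continuation, and large when it didn't.
low_loss = cross_entropy(probs, "cat")
high_loss = cross_entropy(probs, "car")
```

Note that nothing in this objective mentions “coherent text”—it only rewards matching the training distribution token by token, which is exactly why behaviors like coherence have to be explained by a story about the training data rather than read off the loss function.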
(It is amusing to me how our thoughts immediately both jumped to our recent hobbyhorses.)