Yeah, I think GPT-2 was a big moment for this for a lot of people around here. Here was me in 2019. Still, I’d push back on some of the things you’d call “easy”—actually doing them (especially doing them in safety-critical situations where it’s not okay to do the equivalent of spelling Teapot as “Tearot”) still requires a lot of alignment research.
One pattern is that interpretability is easy on human-generated data, because of course something that learns the distribution of that data is usually incentivized to learn human-obvious ways of decomposing it. Interpretability is going to be harder for something like MuZero that learns by self-play and exploration. A problem this presents us with is that interpretability, or just generally being able to point the AI in the right direction, is going to jump in difficulty precisely when the AI is pushing past the human level, because we’re going to be bottlenecked on superhuman-yet-still-human-comprehensible training data.
A recent example is that Minerva can do an outstanding job solving advanced high-school math problems. Why can’t we do the same to solve AI alignment problems? Because Minerva was trained on 5+ billion tokens of math papers where people clearly and correctly solve problems. We don’t have 5 billion tokens of people clearly and correctly solving AI alignment problems, so we can’t get a language model to do that for us.