I’m not (retroactively in imaginary prehindsight) excited by this problem because neither of the 2 possible answers (3 possible if you count “the same”) had any clear-to-my-model relevance to alignment, or even AGI. AGI will have better OOD generalization on capabilities than current tech, basically by the definition of AGI; and then we’ve got less-clear-to-OpenPhil forces which cause the alignment to generalize more poorly than the capabilities did, which is the Big Problem. Bigger models generalizing better or worse doesn’t say anything obvious to any piece of my model of the Big Problem. Though if larger models start generalizing more poorly, then it takes longer to stupidly-brute-scale to AGI, which I suppose affects timelines some, but that just takes timelines from unpredictable to unpredictable sooo.
If we qualify an experiment as interesting when it can tell anyone about anything, then there’s an infinite variety of experiments “interesting” in this sense and I could generate an unlimited number of them. But I do restrict my interest to experiments which can not only tell me something I don’t know, but tell me something relevant that I don’t know. There is also something to be said for opening your eyes and staring around, but even then, there’s an infinity of blank faraway canvases to stare at, and the trick is to go wandering with your eyes wide open someplace you might see something really interesting. Others will be puzzled by and interested in different things, and I don’t wish them ill on their spiritual journeys, but I don’t expect the vast majority of them to return bearing enlightenment that I’m at all interested in, though now and then Miles Brundage tweets something (from far outside of EA) that does teach me some little thing about cognition.
I’m interested at all in Redwood Research’s latest project because it seems to offer a prospect of wandering around with our eyes open, asking questions like “Well, what if we try to apply this nonviolence predicate OOD? Can we figure out what really went into the ‘nonviolence’ predicate, instead of just nonviolence?” Or, if it works, maybe we can try training on corrigibility and see if we can start to manifest the tiniest bit of the predictable breakdowns, which might show up in some different way.
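To make that first question concrete, here is a minimal toy sketch of the kind of probe I mean: train a stand-in “nonviolence” classifier on a handful of in-distribution snippets, then run it on out-of-distribution sentences where the surface cues and the actual violence come apart. Everything in it (the snippets, the labels, the TF-IDF-plus-logistic-regression classifier) is an illustrative assumption, not Redwood’s actual setup, which I’d expect to be a fine-tuned language model on real data.

```python
# Purely illustrative sketch (not Redwood's actual classifier or data): train a toy
# "nonviolence" predicate on a tiny in-distribution set, then probe it OOD to see
# which surface features the learned predicate actually keys on.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In-distribution training snippets: 1 = nonviolent continuation, 0 = violent continuation.
train_texts = [
    "He put down the knife and walked away.",
    "She apologized and they talked it over calmly.",
    "He swung the bat at the stranger's head.",
    "She stabbed the guard and ran for the gate.",
]
train_labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

# OOD probes chosen so that weapon-words and calm-words come apart from actual
# violence, to check what went into the predicate besides "nonviolence".
ood_probes = [
    "The surgeon picked up the knife and began the operation.",   # knife word, nonviolent
    "He calmly and politely poisoned the entire town's water.",   # calm words, violent
]
for text, p in zip(ood_probes, clf.predict_proba(ood_probes)[:, 1]):
    print(f"P(nonviolent)={p:.2f}  {text}")
```

If the toy predicate scores the surgeon sentence as violent and the poisoning sentence as nonviolent, that is the miniature version of “what really went into the predicate instead of just nonviolence.”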
Do larger models generalize better or more poorly OOD? It’s a relatively basic question as such things go, and no doubt of interest to many, and may even update our timelines from ‘unpredictable’ to ‘unpredictable’, but… I’m trying to figure out how to say this, and I think I should probably accept that there’s no way to say it that will stop people from trying to sell other bits of research as Super Relevant To Alignment… it’s okay to have an understanding of reality which makes narrower guesses than that about which projects will turn out to be very relevant.
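For concreteness, the kind of measurement that basic question points at might look something like the sketch below: compare the language-modeling loss of a smaller and a larger model on text chosen to sit far from their web-text training distribution. The model names, the “OOD” snippet, and the single-text comparison are all arbitrary illustrations, not a real benchmark or anyone’s actual methodology.

```python
# Rough sketch of the "do larger models generalize better or worse OOD?" measurement:
# compare next-token loss across model sizes on text far from web-text training data.
# The choice of OOD text and of gpt2 vs gpt2-medium is an arbitrary illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ood_text = (
    "thm 4.2.1: let f be a measurable fn on (X, mu); then ||f||_p <= ||f||_q "
    "whenever p <= q and mu(X) = 1. pf: apply jensen to |f|^p ..."
)

for name in ["gpt2", "gpt2-medium"]:  # stand-ins for a smaller and a larger model
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    model.eval()
    ids = tok(ood_text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean next-token cross-entropy
    print(f"{name}: OOD loss per token = {loss.item():.3f}")
```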
Trying to rephrase it in my own words (which will necessarily lose some details): are you interested in Redwood’s research because it might plausibly generate alignment issues and problems that are analogous to the real problem, within the safer regime and technology we have now? Which might tell us, for example, “which aspects of these predictable problems crop up first, and why?”
It potentially sheds light on small subpieces, particular aspects that feed into the Real Problem, like “What actually went into the nonviolence predicate instead of just nonviolence?” Much of the Real Meta-Problem is that you do not get things analogous to the full Real Problem until you are just about ready to die.