Thinking about this more, I guess it would depend on the exact stopping condition in the training process? If during training we always go to step 5 after a fixed number of rounds, then M will give a prediction of H’s final estimate of the given question after that number of rounds, which may be essentially random (i.e., dependent on H’s background beliefs, knowledge, and psychology) if H is still far from reflective equilibrium at that point. This would be less bad if H could stay reasonably uncertain (not give an estimate too close to 0 or 1) prior to reaching reflective equilibrium, but that seems hard for most humans to do.
What would happen if we instead use convergence as the stopping condition (and throw out any questions that take more than some fixed or random threshold to converge)? Can we hope that M would be able to extrapolate what we want it to do, and predict H’s reflective equilibrium even for questions that take longer to converge than what it was trained on?
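(For concreteness, here’s a minimal sketch of the convergence-based stopping condition I have in mind. Everything here is hypothetical: `H.estimate` stands in for the human’s current probability estimate given the arguments seen so far, and `Adv.argue` for whatever produces the next argument; the only point is the stopping rule.)

```python
def collect_episode(question, H, Adv, max_rounds=100, eps=1e-3, patience=3):
    """Hypothetical data-collection loop with convergence as the stopping condition.

    Returns (question, final_estimate) as a training target for M, or None if H
    never converges within max_rounds (in which case the question is thrown out).
    """
    arguments = []
    estimates = [H.estimate(question, arguments)]          # H's initial estimate, no arguments yet
    for _ in range(max_rounds):
        arguments.append(Adv.argue(question, estimates))   # produce the next argument for H
        estimates.append(H.estimate(question, arguments))  # H re-estimates after seeing it
        recent = estimates[-(patience + 1):]
        # "converged" = the estimate has moved less than eps over the last `patience` rounds
        if len(recent) == patience + 1 and max(recent) - min(recent) < eps:
            return question, estimates[-1]
    # The fixed-round variant would instead always return estimates[-1] after a preset
    # number of rounds, converged or not.
    return None  # never converged: discard rather than label
```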
What would happen if we instead use convergence as the stopping condition (and throw out any questions that take more than some fixed or random threshold to converge)? Can we hope that M would be able to extrapolate what we want it to do, and predict H’s reflective equilibrium even for questions that take longer to converge than what it was trained on?
This is definitely the stopping condition that I’m imagining. What the model would actually do, though, if at deployment time you give it a question that takes the human longer to converge on than any question it saw in training, isn’t something I can really answer, since it depends on a bunch of empirical facts about neural networks that we don’t really know.
The closest we can probably get to answering these sorts of generalization questions now is just to liken the neural net prior to a simplicity prior, ask what the simplest model is that would fit the given training data, and then see if we can reason about what the simplest model’s generalization behavior would be (e.g. the same sort of reasoning as in this post). Unfortunately, I think that sort of analysis generally suggests that most of these sorts of training setups would end up giving you a deceptive model, or at least not the intended model.
That being said, in practice, even if in theory you think you get the wrong thing, you might still be able to avoid that outcome if you do something like relaxed adversarial training to steer the training process in the desired direction via an overseer checking the model using transparency tools while you’re training it.
Regardless, the point of this post, and of AI safety via market making in general, isn’t that I think I have a solution to these sorts of inner-alignment-style tricky generalization problems. Rather, it’s that I think AI safety via market making is a good/interesting outer-alignment-style target to push for, and that it has some nice properties (e.g. compatibility with per-step myopia) that potentially make inner alignment easier to do for it (though still quite difficult, as with all other proposals that I know of).
Now, if we just want to evaluate the outer alignment of AI safety via market making, we can suppose that somehow we do get a model that produces the answer H would give at convergence, and ask whether that answer is good. Even then I’m not sure: I think there’s still the potential for debate-style bad equilibria, where some bad/incorrect arguments are just more convincing to the human than any good/correct argument, even after the human is exposed to all possible counterarguments. I do think that the market-making equilibrium is probably better than the debate equilibrium, since it isn’t limited to just two sides, but I don’t believe that very strongly.
Mostly, for me, the point of AI safety via market making is just that it’s another way to get a similar sort of result as AI safety via debate, but that it allows you to do it via a mechanism that’s more compatible with myopia.
Thanks for this very clear explanation of your thinking. A couple of follow-ups, if you don’t mind.
Unfortunately, I think that sort of analysis generally suggests that most of these sorts of training setups would end up giving you a deceptive model, or at least not the intended model.
Suppose the intended model predicts H’s estimate at convergence, and the actual model predicts H’s estimate at round N for some fixed N larger than any convergence time in the training set. Would you call this an “inner alignment failure”, an “outer alignment failure”, or something else (not an alignment failure)?
Putting these theoretical/conceptual questions aside, the reason I started thinking about this is that I was considering the following scenario. Suppose some humans are faced with a time-sensitive and highly consequential decision, for example, whether to join or support some proposed AI-based governance system (analogous to the 1690 “liberal democracy” question), or how to respond to a hostile superintelligence trying to extort all or most of their resources. It seems that convergence on such questions might take orders of magnitude more time than what M was trained on. What do you think would actually happen if the humans asked their AI advisor to help with a decision like this? (What are some outcomes you think are plausible?)
What’s your general thinking about this kind of AI risk (i.e., where an astronomical amount of potential value is lost because human-AI systems fail to make the right decisions in high-stakes situations brought about by the advent of transformative AI)? Is this something you worry about as an alignment researcher, or do you (for example) think it’s orthogonal to alignment and should be studied in another branch of AI safety / AI risk?
Suppose the intended model predicts H’s estimate at convergence, and the actual model predicts H’s estimate at round N for some fixed N larger than any convergence time in the training set. Would you call this an “inner alignment failure”, an “outer alignment failure”, or something else (not an alignment failure)?
I would call that an inner alignment failure, since the model isn’t optimizing for the actual loss function, but I agree that the distinction is murky. (I’m currently working on a new framework that I really wish I could reference here, but it isn’t quite ready to be public yet.)
It seems that convergence on such questions might take orders of magnitude more time than what M was trained on. What do you think would actually happen if the humans asked their AI advisor to help with a decision like this? (What are some outcomes you think are plausible?)
That’s a hard question to answer, and it really depends on how optimistic you are about generalization. If you just used current methods but scaled up, my guess is you would get deception and it would try to trick you. If we condition on it not being deceptive, I’d guess it would be pursuing some weird proxies rather than actually trying to report the human equilibrium after any number of steps. If we condition on it actually trying to report the human equilibrium after some number of steps, though, my guess is that the simplest way to do that isn’t to have some finite cutoff, so I’d guess it’d do something like an expectation over exponentially distributed steps.
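(Purely illustrative sketch of what “an expectation over exponentially distributed steps” could look like. `estimate_at_round` is a hypothetical stand-in for H’s estimate after a given number of rounds, which the model of course wouldn’t literally have access to; the round count is drawn from a geometric distribution, the discrete analogue of an exponential.)

```python
import numpy as np

def expected_estimate(estimate_at_round, p=0.05, max_rounds=10_000):
    """Weight H's estimate at round n by P(N = n) under a geometric distribution,
    rather than reading it off at any single fixed cutoff."""
    rounds = np.arange(1, max_rounds + 1)
    weights = p * (1 - p) ** (rounds - 1)                      # geometric pmf: P(N = n)
    values = np.array([estimate_at_round(int(n)) for n in rounds])
    return float(np.sum(weights * values) / np.sum(weights))   # renormalize the truncated tail

# e.g., with a toy estimate trajectory that rises toward 0.9:
# expected_estimate(lambda n: min(0.9, 0.5 + 0.01 * n))
```

Under a predictor like that, there’s no single hard cutoff; later rounds just get exponentially less weight.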
What’s your general thinking about this kind of AI risk (i.e., where an astronomical amount of potential value is lost because human-AI systems fail to make the right decisions in high-stakes situations brought about by the advent of transformative AI)? Is this something you worry about as an alignment researcher, or do you (for example) think it’s orthogonal to alignment and should be studied in another branch of AI safety / AI risk?
Definitely seems worth thinking about and taking seriously. Some thoughts:
Ideally, I’d like to just avoid making any decisions that lead to lock-in while we’re still figuring things out (e.g. wait to build anything like a sovereign for a long time). Of course, that might not be possible/realistic/etc.
Hopefully, this problem will just be solved as AI systems become more capable—e.g. if you have a way of turning any unaligned benchmark system into a new system that honestly/helpfully reports everything that the unaligned benchmark knows, then as the unaligned benchmark gets better, you should get better at making decisions with the honest/helpful system.