[this is a draft that I shared with a bunch of friends a while ago; they raised many issues that I haven’t addressed, but might address at some point in the future]
In my opinion, and AFAICT the opinion of many alignment researchers, there are problems with aligning superintelligent models that no alignment techniques so far proposed are able to fix. Even if we had a full kitchen sink approach where we’d overcome all the practical challenges of applying amplification techniques, transparency techniques, adversarial training, and so on, I still wouldn’t feel that confident that we’d be able to build superintelligent systems that were competitive with unaligned ones, unless we got really lucky with some empirical contingencies that we will have no way of checking except for just training the superintelligence and hoping for the best.
Two examples:
- A simplified version of the hope with IDA is that we’ll be able to have our system make decisions in a way that never has to rely on searching over uninterpretable spaces of cognitive policies. But this will only be competitive if IDA can do all the same cognitive actions that an unaligned system can do, which is probably false (cf. Inaccessible Information).
- The best we could possibly hope for with transparency techniques is: For anything that a neural net is doing, we are able to get the best possible human-understandable explanation of what it’s doing, and what we’d have to change in the neural net to make it do something different. But this doesn’t help us if the neural net is doing things that rely on concepts that it’s fundamentally impossible for humans to understand, because they’re too complicated or alien. It seems likely to me that these concepts exist. And so systems will be much weaker if we demand interpretability.
Even though these techniques are fundamentally limited, I think there are still several arguments in favor of sorting out the practical details of how to implement them:
- Perhaps we actually should be working on solving the alignment problem for non-arbitrarily powerful systems.
  - Maybe because we only need to align slightly superhuman systems that we can hand alignment work off to. (I think that this relies on assumptions about gradual development of AGI and some other assumptions.)
  - Maybe because narrow AI will be transformative before general AI, and even though narrow AI doesn’t pose an x-risk from power-seeking, it would still be nice to be able to align it so that we can apply it to a wider variety of tasks (which I think makes it less of a scary technological development in expectation). (Note that this argument for working on alignment is quite different from the traditional arguments.)
- Perhaps these fundamentally limited alignment strategies work on arbitrarily powerful systems in practice, because the concepts that our neural nets learn, or the structures they organize their computations into, are extremely convenient for our purposes. (I called this “empirical generalization” in my other doc; maybe I should have more generally called it “empirical contingencies work out nicely”.)
- These fundamentally limited alignment strategies might be ingredients in better alignment strategies. For example, many different alignment strategies require transparency techniques, and it’s not crazy to imagine that if we come up with some brilliant theoretically motivated alignment schemes, these schemes will still need something like transparency, and so the research we do now will be crucial for the overall success of our schemes later.
  - The story for this being false is something like “later on, we’ll invent a beautiful, theoretically motivated alignment scheme that solves all the problems these techniques were solving as a special case of solving the overall problem, and so research on how to solve these subproblems was wasted.” As an analogy, think of how a lot of research in computer vision or NLP seems kind of wasted now that we have modern deep learning.
- The practical lessons we learn might also apply to better alignment strategies. For example, reinforcement learning from human feedback obviously doesn’t solve the whole alignment problem. But it’s also clearly a stepping stone towards being able to do more amplification-like things where your human judges are aided by a model.
- More indirectly, the organizational and individual capabilities we develop as a result of doing this research seem very plausibly helpful for doing the actually good research. Like, I don’t know exactly what that will involve, but it feels pretty likely that it will involve doing ML research, and arguing about alignment strategies in Google Docs, and having large and well-coordinated teams of researchers, and so on. I don’t think it’s healthy to pursue the learning value alone (I think you get much more of the learning value if you’re really trying to actually do something useful), but I think it’s worth taking into consideration.
But isn’t it a higher priority to try to propose better approaches? I think this depends on empirical questions and comparative advantage. If we want good outcomes, we need both to have good approaches and to know how to make them work in practice; lacking either leads to failure. It currently seems pretty plausible to me that, on the margin, at least I personally should be trying to scale the applied research while we wait for our theory-focused colleagues to figure out the better ideas. (Part of this is because I think it’s reasonably likely that the theory researchers will make a bunch of progress over the next year or two. Also, I think it’s pretty likely that most of the work required is going to be applied rather than theoretical.)
I think that research on these insufficient strategies is useful. But I think it’s also quite important for people to remember that they’re insufficient, and that they don’t suffice to solve the whole problem on their own. I think that people who research them often equivocate between “this is useful research that will plausibly be really helpful for alignment” and “this strategy might work for aligning weak intelligent systems, but we can see in advance that it might have flaws that only arise when you try to use it to align sufficiently powerful systems and that might not be empirically observable in advance”. (A lot of this equivocation is probably because they outright disagree with me on the truth of the second statement.)
I wonder what you mean by “competitive”? Let’s talk about the “alignment tax” framing. One extreme is that we can find a way such that there is no tradeoff whatsoever between safety and capabilities—an “alignment tax” of 0%. The other extreme is an alignment tax of 100%—we know how to make unsafe AGIs but we don’t know how to make safe AGIs. (Or more specifically, there are plans / ideas that an unsafe AI could come up with and execute, and a safe AI can’t, not even with extra time/money/compute/whatever.)
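One rough way to make these two endpoints concrete (a simplified gloss of my own, treating “capability” as a single scalar rather than anything the framing itself commits to):
$$\text{alignment tax} \;=\; 1 - \frac{C_{\text{aligned}}}{C_{\text{unaligned}}}$$
so a 0% tax means the aligned system gives up nothing relative to the unaligned one, and a 100% tax means the aligned system can accomplish essentially none of what the unaligned one can. Collapsing capability to one number is obviously a simplification; the interesting question is where between these endpoints a given technique lands.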
I’ve been resigned to the idea that an alignment tax of 0% is a pipe dream—that’s just way too much to hope for, for various seemingly-fundamental reasons like humans-in-the-loop being slower and more expensive than humans-out-of-the-loop (more discussion here). But we still want to minimize the alignment tax, and we definitely want to avoid the alignment tax being 100%. (And meanwhile, independently, we try to tackle the non-technical problem of ensuring that all the relevant players are always paying the alignment tax.)
I feel like your post makes more sense to me when I replace the word “competitive” with something like “arbitrarily capable” everywhere (or “sufficiently capable” in the bootstrapping approach where we hand off AI alignment research to the early AGIs). I think that’s what you have in mind?—that you’re worried these techniques will just hit a capabilities wall, and beyond that the alignment tax shoots all the way to 100%. Is that fair? Or do you see an alignment tax of even 1% as an “insufficient strategy”?
I appreciate your points, and I don’t think I see significant points of disagreement. But in terms of emphasis, it seems concerning to be putting effort into (what looks like) rationalizing not updating towards the view that a given approach has no hope of working. (Or maybe more accurately, that a given approach won’t lead to a sufficient understanding that we could know it would work, which (with further argument) implies that it will not work.) Like, I guess I want to amplify your point
> But I think it’s also quite important for people to remember that they’re insufficient, and that they don’t suffice to solve the whole problem on their own.
and say further that one’s stance toward the benefit of working on things with clearer metrics of success would hopefully include continuously noticing everyone else’s stance toward that situation. If a given unit of effort can only be directed towards marginal things, then we could ask (for example): what would it look like to make cumulative marginal progress towards, say, improving our ability to propose better approaches, rather than marginal progress on approaches that we know won’t resolve the key issues?
> The best we could possibly hope for with transparency techniques is: For anything that a neural net is doing, we are able to get the best possible human-understandable explanation of what it’s doing, and what we’d have to change in the neural net to make it do something different. But this doesn’t help us if the neural net is doing things that rely on concepts that it’s fundamentally impossible for humans to understand, because they’re too complicated or alien. It seems likely to me that these concepts exist. And so systems will be much weaker if we demand interpretability.
That may be “the best we could hope for”, but I’m more worried about “we can’t understand the neural net (with the tools we have)” than about “the neural net is doing things that rely on concepts that it’s fundamentally impossible for humans to understand”. (Or: solving the task requires concepts that are really complicated for the network to learn (though maybe easy for humans to understand), and so the neural network doesn’t get them.)
> And so systems will be much weaker if we demand interpretability.
Whether or not “empirical contingencies work out nicely”, I think the concern about “fundamentally impossible to understand concepts” is… something that won’t show up in every domain. (I also think there exist things that people could understand, but it takes so much work that people don’t do it. There’s an example from math involving some obscure theorems that aren’t used a lot for that reason.)
Potentially people could make the cost function of an AI’s model include its ease of interpretation by humans as a factor. Having people manually check every change in a model for its effect on interpretability would be too slow, but an AI could still periodically check its current best model with humans and learn a different one if it’s too hard to interpret.
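A minimal sketch of what that might look like (an illustration only; every function below is a hypothetical stand-in, not an existing API): the training objective adds a weighted interpretability penalty, and the penalty is refreshed by human review only occasionally rather than at every update.

```python
# Toy sketch of training against "task loss + interpretability penalty",
# where the penalty comes from infrequent human review of the current model.
# Everything here is a hypothetical stand-in, not a real library or API.
import random

def compute_task_loss(model):
    """Stand-in for the ordinary training loss on a batch of data."""
    return (model - 3.0) ** 2 + random.uniform(0, 0.1)

def humans_rate_interpretability(model):
    """Stand-in for a slow human review; higher score = harder to interpret."""
    return random.uniform(0, 1)

def sgd_step(model, loss, lr=0.01):
    """Stand-in for one optimizer update (here: just nudge a single parameter)."""
    return model - lr * loss

model = 0.0            # pretend the whole model is one parameter
penalty = 0.0
REVIEW_INTERVAL = 100  # humans only look at the model occasionally
PENALTY_WEIGHT = 0.1   # how strongly interpretability trades off against the task

for step in range(1000):
    if step % REVIEW_INTERVAL == 0:
        penalty = humans_rate_interpretability(model)  # periodic, not per-step
    loss = compute_task_loss(model) + PENALTY_WEIGHT * penalty
    model = sgd_step(model, loss)
```

The point is just the shape of the loop: the expensive human judgment enters as an occasional term in the loss rather than as a gate on every individual change.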
I’ve seen a lot of mention of the importance of safe AI being competitive with unsafe AI. And I’m wondering what would happen if governments simply outlawed or heavily taxed the use of unsafe AI techniques. Then even with significant capability increases, it wouldn’t be worthwhile to use them.
Is it really so doubtful that governments would create such regulation? I mean, I’ve already heard some people high up in government express concern about AI safety. And the Future of Life Institute got the California government to unanimously pass the Asilomar AI Principles, which include things about AI safety, like rigidly controlling any AI that can recursively self-improve.
Widespread use of powerful, unaligned AI sounds extremely dangerous. So simply to protect themselves and their families, the people in government could potentially benefit a lot from implementing such regulations.
A key psychological advantage of the “modest alignment” agenda is that it’s not insanity-inducing. When I seriously contemplate the problem of selecting a utility function to determine the entire universe until the end of time, I want to die (which seems safer and more responsible).
But the problem of making language models “be honest” instead of just continuing the prompt? That’s more my speed; that, I can think about, and possibly even usefully contribute to, without wanting to die. (And if someone else in the future uses honest language models as one of many tools to help select a utility function to determine the entire universe until the end of time, that’s not my problem and not my fault.)
What’s insanity-inducing about it? (Not suggesting you dip into the insanity-tending state, just wondering if you have speculations from afar.)
The problem statement you gave does seem to have an extreme flavor. I want to distinguish “selecting the utility function” from the more general class of “real cores of the problem”. The OP was about (the complement of) the set of research directions that are in some way aimed directly at resolving core issues in alignment, which sounds closer to your second paragraph.
If it’s philosophical difficulty that’s insanity-inducing (e.g. “oh my god this is impossible, we’re going to die, aaaahh”), that’s a broader problem. But if it’s more “I can’t be responsible for making the decision, I’m not equipped to commit the lightcone one way or the other”, that seems orthogonal to some alignment issues. For example, trying to understand what it would look like to follow along with an AI’s thoughts is more difficult and philosophically fraught than your framing of engineering honesty, but also doesn’t seem to induce responsibility-paralysis, eh?
I think that was the idea behind “oracle AIs”. (Though I’m aware there were arguments against that approach.)
One of the arguments I didn’t see for it was: “As we get better at this alignment stuff we will reduce the ‘tradeoff’.” (Also, arguably, getting better human feedback improves performance.)