But it’s much safer to deploy AI iteratively, increasing the stakes, time horizons, and autonomy a little bit each time. With this iterative approach to deployment, you only need to generalize a little bit out of distribution. Further, you can use Agent N to help you closely supervise Agent N+1 before giving it any power.
My model of Eliezer claims that there are some capabilities that are ‘smooth’, like “how large a times table you’ve memorized”, and some are ‘lumpy’, like “whether or not you see the axioms behind arithmetic.” While it seems plausible that we can iteratively increase smooth capabilities, it seems much less plausible for lumpy capabilities.
A specific example: if you have a neural network with enough capacity to 1) memorize specific multiplication Q&As and 2) implement a multiplication calculator, my guess is that during training you’ll see a discontinuity in how many pairs of numbers it can successfully multiply.[1] It is not obvious to me whether there are relevant capabilities like this that we’ll “find with neural nets” instead of “explicitly programming in”; probably we will just build AlphaZero so that it uses MCTS instead of finding MCTS with gradient descent, for example.
[edit: actually, I also don’t think I get how you’d use a ‘smaller times table’ to oversee a ‘bigger times table’ unless you already knew how arithmetic worked, at which point it’s not obvious why you’re not just writing an arithmetic program.]
That it might be possible to establish an agent’s inner objective when training on easy problems, when the agent isn’t very capable, such that this objective remains stable as the agent becomes more powerful.
IMO this runs into two large classes of problems, both of which I put under the heading ‘ontological collapse’.
First, suppose the agent’s inner objective is internally located: “seek out pleasant tastes.” Then you run into 16 and 17, where you can’t quite be sure what it means by “pleasant tastes”, and you don’t have a great sense of what “pleasant tastes” will extrapolate to at the next level of capabilities. [One running “joke” in EA is that, on some theories of what morality is about, the highest-value universe is one which contains an extremely large number of rat brains on heroin. I think this is the correct extrapolation / maximization of at least one theory which produces good behavior when implemented by humans today, which makes me pretty worried about this sort of extrapolation.]
Second, suppose the agent’s inner objective is externally located: “seek out mom pressing the reward button”. Then you run into 18, which argues that once the agent realizes that the ‘reward button’ is an object in its environment instead of a communication channel between the human and itself, it may optimize for the object instead of ‘being able to hear what the human would freely communicate’ or whatever philosophically complicated variable it is that we care about. [Note that attempts to express this often need multiple patches and still aren’t fixed; “mom approves of you” can be coerced, and “mom would freely approve of you” runs into trouble because you have some freedom in identifying your concept of ‘mom’, which means you might pick one who happens to approve of you.]
there’s lots of ongoing research and promising ideas for fixing it.
I’m optimistic about this too, but… I want to make sure we’re looking at the same problem, or something? I think my sense is best expressed in Stanovich and West, where they talk about four responses to the presence of systematic human misjudgments. The ‘performance error’ response is basically the ‘epsilon-rationality’ assumption: 1-ε of the time humans make the right call, and ε of the time they make a random call. While that’s a fine model of performance errors, it doesn’t accurately predict what’s happening with systematic errors, which are predictable instead of stochastic.
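To make that contrast concrete, here’s a minimal toy sketch (the choice set and numbers are made up purely for illustration): under the ε-rationality model, errors are unpredictable noise spread over all the wrong options, whereas a systematic error piles its probability mass on one predictable wrong answer.

```python
# Toy comparison: performance errors vs. systematic errors.
ACTIONS = ["a", "b", "c", "d"]   # hypothetical choice set
CORRECT = "a"                    # the option the human "should" pick

def epsilon_rational(eps=0.1):
    """Performance-error model: with prob 1-eps pick the right option,
    with prob eps pick uniformly at random."""
    probs = {act: eps / len(ACTIONS) for act in ACTIONS}
    probs[CORRECT] += 1 - eps
    return probs

def systematic_error(wrong="b", p_wrong=0.8):
    """Systematic-error model: one predictable wrong option (e.g. a framing
    effect or a developmental gap) absorbs most of the probability mass."""
    probs = {act: 0.0 for act in ACTIONS}
    probs[CORRECT] = 1 - p_wrong
    probs[wrong] = p_wrong
    return probs

print(epsilon_rational())   # errors spread thinly and evenly: no single wrong answer is likely
print(systematic_error())   # one wrong answer dominates: the error is predictable
```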
I sometimes see people suggest that the model should always or never conform to the human’s systematic errors, but it seems to me like we need to somehow distinguish between systematic “errors” that are ‘value judgments’ (“oh, it’s not that the human prefers 5 deaths to 1 death, it’s that they are opposed to this ‘murder’ thing that I should figure out”) and systematic errors that are ‘bounded rationality’ or ‘developmental levels’ (“oh, it’s not that the (very young) human prefers less water to more water, it’s that they haven’t figured out conservation of mass yet”). It seems pretty sad if we embed all of our confusions into the AI forever—and also pretty sad if we end up not able to transfer any values because all of them look like confusions.[2]
[1] This might depend on what sort of curriculum you train it on; I was imagining something like 1) set the number of digits N=1, 2) generate two numbers uniformly at random between 1 and 2^N and pass them as inputs (as sequences of digits?), 3) compare the outputted sequence of digits to the correct answer, either with a binary pass/fail or some sort of continuous similarity metric (so it gets some points for 12x12 = 140 or whatever); once it performs at 90% success, check its performance at larger N until you find one with below 80% success, and continue training there. In that scenario, I think it just memorizes until N is moderately sized (8?), at which point it figures out how to multiply, and then you can increase N a lot without losing accuracy (until you hit some overflow error in its implementation of multiplication from having large numbers).
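For concreteness, a rough sketch of that curriculum loop (`model` and `train_step` are hypothetical stand-ins for whatever architecture and update rule you’d actually use; the thresholds are the ones above, and this uses the binary pass/fail scoring option):

```python
import random

def sample_pair(n_digits):
    """Draw two factors uniformly at random from [1, 2**n_digits]."""
    hi = 2 ** n_digits
    return random.randint(1, hi), random.randint(1, hi)

def accuracy(model, n_digits, trials=500):
    """Fraction of fresh random pairs at this size the model multiplies correctly."""
    pairs = (sample_pair(n_digits) for _ in range(trials))
    return sum(model(a, b) == a * b for a, b in pairs) / trials

def run_curriculum(model, train_step, max_digits=16):
    """Train at size N until the model clears 90% accuracy, then probe larger
    sizes until accuracy drops below 80%, and resume training there."""
    n = 1
    while n <= max_digits:
        while accuracy(model, n) < 0.90:
            train_step(model, *sample_pair(n))      # show it one labelled example at size n
        while n <= max_digits and accuracy(model, n) >= 0.80:
            n += 1                                  # skip sizes it already handles
    return model
```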
[2] I’m being a little unfair in using the trolley problem as an example of a value judgment, because in my mind the people who think you shouldn’t pull the lever because it’s murder are confused or missing a developmental jump—but I have the sense that for most value judgments we could find, we can find some coherent position which views it as confused in this way.
Re: smooth vs bumpy capabilities, I agree that capabilities sometimes emerge abruptly and unexpectedly. Still, iterative deployment with gradually increasing stakes is much safer than deploying a model to do something totally unprecedented and high-stakes. There are multiple ways to make deployment more conservative and gradual. (E.g., incrementally increase the amount of work the AI is allowed to do without close supervision, incrementally increase the amount of KL-divergence between the new policy and a known-to-be-safe policy.)
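Here’s a toy sketch of what that second knob could look like (made-up numbers; policies are treated as plain probability tables over a small state space, and the budget schedule is arbitrary): gate each deployment on the new policy staying within this round’s KL budget of a known-to-be-safe reference policy, and grow the budget a little each round.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 1000, 5

def mean_kl(p, q):
    """KL(p || q) per state, averaged over a (here uniform) state distribution."""
    return float(np.mean(np.sum(p * np.log(p / q), axis=1)))

def deployment_gate(new_policy, safe_policy, kl_budget):
    """Deploy only if the new policy stays within this round's KL budget."""
    d = mean_kl(new_policy, safe_policy)
    return d <= kl_budget, d

# Toy policies: each row is a state's distribution over actions.
safe = rng.dirichlet(np.ones(N_ACTIONS), size=N_STATES)
new = 0.95 * safe + 0.05 * rng.dirichlet(np.ones(N_ACTIONS), size=N_STATES)

for rnd, budget in enumerate([0.005, 0.01, 0.02]):   # budget grows a little each round
    ok, d = deployment_gate(new, safe, budget)
    print(f"round {rnd}: mean KL = {d:.4f}, within budget: {ok}")
```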
Re: ontological collapse, there are definitely some tricky issues here, but the problem might not be so bad with the current paradigm, where you start with a pretrained model (which doesn’t really have goals and isn’t good at long-horizon control), and fine-tune it (which makes it better at goal-directed behavior). In this case, most of the concepts are learned during the pretraining phase, not the fine-tuning phase where it learns goal-directed behavior.
Still, iterative deployment with gradually increasing stakes is much safer than deploying a model to do something totally unprecedented and high-stakes.
I agree with the “X is safer than Y” claim; I am uncertain whether it’s practically available to us, and much more worried in worlds where it isn’t available.
incrementally increase the amount of KL-divergence between the new policy and a known-to-be-safe policy
For this specific proposal, when I reframe it as “give the system a KL-divergence budget to spend on each change to its policy”, I worry that it works against a stochastic attacker but not an optimizing attacker; it may be the case that every known-to-be-safe policy has some unsafe policy within a reasonable KL-divergence of it, because the danger can be localized in changes to some small part of the overall policy-space.
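To make the worry concrete, here’s a toy illustration (uniform state distribution and made-up numbers; nothing about it is specific to any real policy class): a policy that matches the safe one everywhere except a handful of states keeps its average KL tiny, while behaving almost deterministically differently on exactly those states.

```python
import numpy as np

N_STATES, N_ACTIONS = 100_000, 5
ATTACKED = 10    # the few states where behaviour is changed

# Known-to-be-safe policy: uniform over actions in every state.
safe = np.full((N_STATES, N_ACTIONS), 1.0 / N_ACTIONS)

# "Optimized" policy: identical to safe except on the attacked states,
# where it puts almost all its mass on one (unsafe) action.
new = safe.copy()
new[:ATTACKED] = [0.996, 0.001, 0.001, 0.001, 0.001]

per_state_kl = np.sum(new * np.log(new / safe), axis=1)
print("mean KL over all states:", per_state_kl.mean())               # ~1.6e-4: passes a modest budget
print("KL on the attacked states:", per_state_kl[:ATTACKED].mean())  # ~1.6: very different behaviour there
```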
the problem might not be so bad with the current paradigm, where you start with a pretrained model (which doesn’t really have goals and isn’t good at long-horizon control), and fine-tune it (which makes it better at goal-directed behavior). In this case, most of the concepts are learned during the pretraining phase, not the fine-tuning phase where it learns goal-directed behavior.
Yeah, I agree that this seems pretty good. I do naively guess that when you do the fine-tuning, it’s the concepts that are most related to the goals that change the most (as they have the most gradient pressure on them); it’d be nice to know how much this is the case, vs. most of the relevant concepts being durable parts of the environment that were already very important for goal-free prediction.