In the strategy-stealing assumption I describe a policy we might want our AI to follow:
Keep the humans safe, and let them deliberate(/mature) however they want.
Maximize option value while the humans figure out what they want.
When the humans figure out what they want, listen to them and do it.
Intuitively this is basically what I expect out of a corrigible AI, but I agree with Eliezer that this seems more realistic as a goal if we can see how it arises from a reasonable utility function.
So what does that utility function look like?
A first-pass answer is pretty similar to my proposal from A Formalization of Indirect Normativity: we imagine some humans who actually have the opportunity to deliberate however they want and are able to review all of our AI’s inputs and outputs. After a very long time, they evaluate the AI’s behavior on a scale from -1 to 1, where 0 corresponds to “nothing morally relevant happens,” and that evaluation is the AI’s utility.
The big difference is that I’m now thinking about what would actually happen in the real world if the humans had the space and security to deliberate, rather than formally defining a hypothetical process. I think that is going to end up being both safer and easier to implement, though it introduces its own set of complications.
Our hope is that the policy “keep the humans safe, then listen to them about what to do” is a good strategy for getting a high utility in this game, even if our AI is very unsure about what the humans would ultimately want. Then if our AI is sufficiently competent we can expect it to find a strategy at least this good.
The most important complication is that the AI is no longer isolated from the deliberating humans. We don’t care about what the humans “would have done” if the AI hadn’t been there—we need our AI to keep us safe (e.g. from other AI-empowered actors), we will be trusting our AI not to mess with the process of deliberation, and we will likely be relying on our AI to provide “amenities” to the deliberating humans (filling the same role as the hypercomputer in the old proposal).
Going even further, I’d like to avoid defining values in terms of any kind of counterfactual like “what the humans would have said if they’d stayed safe” because I think those will run into many of the original proposal’s problems.
Instead we’re going to define values in terms of what the humans actually conclude here in the real world. Of course we can’t just say “Values are whatever the human actually concludes” because that will lead our agent to deliberately compromise human deliberation rather than protecting it.
Instead, we are going to add in something like narrow value learning. Assume the human has some narrow preferences over what happens to them over the next hour. These aren’t necessarily that wise. They don’t understand what’s happening in the “outside world” (e.g. “am I going to be safe five hours from now?” or “is my AI-run company acquiring a lot of money I can use when I figure out what I want?”). But they do assign low value to the human getting hurt, and high value to the human feeling safe and succeeding at their local tasks; they assign low value to the human tripping and breaking their neck, and high value to having the AI make them a hamburger if they ask for a hamburger; and so on. These preferences are basically dual to the actual process of deliberation that the human undergoes. There is a lot of subtlety in defining or extracting these local values, but for now I’m going to brush that aside and just ask how to extract the utility function from this whole process.
It’s no good to simply use the local values, because we need our AI to do some lookahead (both to future timesteps when the human wants to remain safe, and to the far future when the human will evaluate how much option value the AI actually secured for them). It’s no good to naively integrate local values over time, because a very low score during a brief period (where the human is killed and replaced by a robot accomplice) cannot be offset by any number of high scores in the future.
Here’s my starting proposal:
We quantify the human’s local preferences by asking “Look at the person you actually became. How happy are you with that person? Quantitatively, how much of your value was lost by replacing yourself with that person?” This gives us a loss on a scale from 0% (perfect idealization, losing nothing) to 100% (where all of the value is gone). Most of these losses will be exceptionally small, especially if we look at a short period like an hour.
Eventually, once the human becomes wise enough to totally epistemically dominate the original AI, they can assign a score to the AI’s actions. To make life simple for now, let’s ignore negative outcomes and just describe value as a scalar from 0% (barren universe) to 100% (all of the universe is used in an optimal way). Or we might use this “final scale” in a different way (e.g. to evaluate the AI’s actions rather than actually assessing outcomes, assigning high scores to corrigible and efficient behavior and somehow quantifying deviations from that ideal).
The utility is the product of all of these numbers.
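To make the aggregation concrete, here is a minimal sketch of one natural reading, where each local loss multiplies away a fraction of value and the final evaluation is one more factor; the function name, the hourly discretization, and the exact scaling are placeholders rather than part of the proposal:

```python
def utility(local_losses, final_score):
    """One possible reading of the aggregation described above."""
    # local_losses: for each short period (e.g. each hour), the fraction of
    #   value the human judges was lost in being replaced by their successor,
    #   in [0, 1]; normally these are exceptionally small.
    # final_score: the eventual evaluation by the matured human, in [0, 1]
    #   (0 = barren universe, 1 = universe used optimally).
    u = final_score
    for loss in local_losses:
        # A single catastrophic period (loss near 1, e.g. the human is killed
        # and replaced) drives the whole product toward 0, no matter how high
        # any later scores are.
        u *= 1.0 - loss
    return u
```

On this reading, the many ordinary hourly losses compound only mildly, while a brief period in which deliberation is corrupted cannot be offset by any number of later high scores, which is the property the proposal is after.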
I think there are a lot of problems with this method of quantitative aggregation. But I think this direction is promising and I currently expect something along these lines will work.
If I follow correctly, the first step requires the humans to evaluate the output of narrow value learning, until this output becomes good enough to become universal with regard to the original AI and supervise it? I’m not sure I get why the AI wouldn’t be incentivized to tamper with the narrow value learning, à la Predict-o-matic? Depending on certain details (like maybe the indescribable hellworld hypothesis), maybe the AI can introduce changes to the partial imitations/deliberations that end up hidden and compounding until the imitations epistemically dominate the AI, and then they ask it to do simple stuff.
The hope is that a tampering large enough to corrupt the human’s final judgment would get a score of ~0 in the local value learning. 0 is the “right” score since the tampered human by hypothesis has lost all of the actual correlation with value. (Note that at the end you don’t need to “ask it to do simple stuff”; you can just directly assign a score of 1.)
This hope does require the local oversight process to be epistemically competitive with the AI, in the sense that e.g. if the AI understands something subtle about the environment dynamics then the oversight process also needs to understand that. And that’s what we are trying to do with all of this business about training AIs to answer questions honestly. The point is just that you don’t have to clear up any of the ambiguity about what the human wants, you just have to be able to detect someone tampering with deliberation. (And the operationalization of tampering doesn’t have to be so complex.)
(I’m not sure if this made too much sense, I have a draft of a related comment that I’ll probably post soon but overall expect to just leave this as not-making-much-sense for now.)
So you want a sort of partial universality sufficient to bootstrap the process locally (while not requiring an understanding of our values in fine detail), giving us enough time for a deliberation that would epistemically dominate the AI in a global sense (and get our values right)?
If that’s about right, then I agree that having this would make your proposal work, but I still don’t know how to get it. I need to read your previous posts on answering questions honestly.
You basically just need full universality / epistemic competitiveness locally. This is just getting around “what are values?”, not the need for competitiveness. Then the global thing is also epistemically competitive, and it is able to talk about e.g. how our values interact with the alien concepts uncovered by our AI (which we want to reserve time for since we don’t have any solution better than “actually figure everything out ‘ourselves’”).
Almost all of the time I’m thinking about how to get epistemic competitiveness for the local interaction. I think that’s the meat of the safety problem.
The upside of humans in reality is that there is no need to figure out how to make efficient imitations that function correctly (as in X-and-only-X). To be useful, imitations should be efficient, which exact imitations are not. Yet for the role of building blocks of alignment machinery, imitations shouldn’t have important systematic tendencies not found in the originals, and their absence is only clear for exact imitations (if not put in very unusual environments).
Suppose you already have an AI that interacts with the world, protects it from dangerous AIs, and doesn’t misalign people living in it. Then there’s time to figure out how to perform X-and-only-X efficient imitation, which drastically expands the design space and makes it more plausible that the kinds of imitation-reliant systems you wrote about a lot actually work as intended. In particular, this might include the kind of long reflection that has all the advantages of happening in reality without wasting time and resources on straightforwardly happening in reality, or letting the bad things that would happen in reality actually happen.
So figuring out object level values doesn’t seem like a priority if you somehow got to the point of having an opportunity to figure out efficient imitation. (While getting to that point without figuring out object level values doesn’t seem plausible, maybe there’s a suggestion of a process that gets us there in the limit in here somewhere.)
I think the biggest difference is between actual and hypothetical processes of reflection. I agree that an “actual” process of reflection would likely ultimately involve most humans migrating to emulations for the speed and other advantages. (I am not sure that a hypothetical process necessarily needs efficient imitations, rather than AI reasoning about what actual humans—or hypothetical slow-but-faithful imitations—might do.)
I see getting safe and useful reasoning about exact imitations as a weird special case, or maybe a reformulation, of X-and-only-X efficient imitation. Anchoring to exact imitations in particular makes accurate prediction more difficult than it needs to be: it’s not the thing we care about, and there are many irrelevant details that influence outcomes which accurate predictions would need to take into account. So a good “prediction” is going to be value-laden, with concrete facts about actual outcomes of setups built out of exact imitations being unimportant, which is about the same as the problem statement of X-and-only-X efficient imitation.
If such “predictions” are not good enough by themselves, the underlying actual process of reflection (people living in the world) won’t save/survive this if there’s too much agency guided by the predictions. Using an underlying hypothetical process of reflection (by which I understand running a specific program) is more robust: the AI might go very wrong initially, but it will correct itself once it gets around to computing the outcomes of the hypothetical reflection with more precision, provided the hypothetical process of reflection is defined as isolated from the AI.
I’m not sure what difference between hypothetical and actual processes of reflection you are emphasizing (if I understood the terms correctly), since the actual civilization might plausibly move into a substrate that is more like ML reasoning than concrete computation (let alone concrete physical incarnation), and thus become the same kind of thing as hypothetical reflection. The most striking distinction (for AI safety) seems to be the implication that an actual process of reflection can’t be isolated from decisions of the AI taken based on insufficient reflection.
There’s also the need to at least define exact imitations, or better yet X-and-only-X efficient imitation, in order to define a hypothetical process of reflection, which is not as absolutely necessary for actual reflection. So getting hypothetical reflection at all might be more difficult than getting some sort of temporary stability with actual reflection, which can then be used to define hypothetical reflection and thereby guard against the consequences of overly agentic use of bad predictions of (on) actual reflection.
It seems to me like “reason about a perfect emulation of a human” is an extremely similar task to “reason about a human”; it does not feel closely related to X-and-only-X efficient imitation. For example, you can make calibrated predictions about what a human would do using vastly less computing power than a human (even using existing techniques), whereas perfect imitation likely requires vastly more computing power.
The point is that in order to be useful, a prediction/reasoning process should contain mesa-optimizers that perform decision making similar in a value-laden way to what the original humans would do. The results of the predictions should be determined by decisions of the people being predicted (or of people sufficiently similar to them), in the free-will-requires-determinism/you-are-part-of-physics sense. The actual cognitive labor of decision making needs to in some way be an aspect of the process of prediction/reasoning, or it’s not going to be good enough. And in order to be safe, these mesa-optimizers shouldn’t be systematically warped into something different (from a value-laden point of view), and there should be no other mesa-optimizers with meaningful influence in there. This just says that prediction/reasoning needs to be X-and-only-X in order to be safe. Thus the equivalence. Prediction of exact imitation in particular is weird because in that case the similarity measure between prediction and exact imitation is hinted to not be value-laden, which it might have to be in order for the prediction to be both X-and-only-X and efficient.
This is only unimportant if X-and-only-X is the likely default outcome of predictive generalization, so that not paying attention to this won’t result in failure, but nobody understands if this is the case.
The mesa-optimizers in the prediction/reasoning that are similar to the original humans are what I mean by efficient imitations (whether X-and-only-X or not). They are not themselves the predictions of original humans (or of exact imitations), which might well not be present as explicit parts of the design of reasoning about the process of reflection as a whole; instead they are the implicit decision makers that determine what the conclusions of the reasoning say, and they are much more computationally efficient (as aspects of cheaper reasoning) than exact imitations. At the same time, if they are similar enough in a value-laden way to the originals, there is no need for better predictions, much less for exact imitation: the prediction/reasoning is itself the imitation we’d want to use, without any reference to an underlying exact process. (In a story simulation, there are no concrete states of the world, only references to states of knowledge, yet there are mesa-optimizers who are the people inhabiting it.)
If prediction is to be value-laden, with value defined by reflection built out of that same prediction, the only sensible way to set this up seems to be as a fixpoint of an operator that maps (states of knowledge about) values to (states of knowledge about) values-on-reflection, computed by making use of the argument values to do value-laden efficient imitation. But if this setup is not performed correctly, then even if it’s set up at all, we are probably going to get bad fixpoints, as happens with things like bad Nash equilibria. And if it is performed correctly, then it might be much more sensible to allow an AI to influence what happens within the process of reflection more directly than merely by making systematic distortions in predicting/reasoning about it; thus hypothetical processes of reflection wouldn’t need the isolation from the AI’s agency that normally makes them safer than the actual process of reflection.
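For concreteness, the operator-fixpoint structure described here could be sketched as follows; everything in the sketch (the names, the plain iteration scheme, the convergence test) is a hypothetical placeholder, and the worry above is precisely that an iteration like this can settle on a bad fixpoint rather than the intended one:

```python
def values_on_reflection(initial_values, imitate_on_reflection, close_enough,
                         max_iters=1000):
    # Sketch of the fixpoint idea: feed a (state of knowledge about) values
    # into value-laden efficient imitation of the process of reflection, read
    # off the values that reflection endorses, and repeat until input and
    # output agree.
    #
    # imitate_on_reflection(values) -> values: hypothetical value-laden
    #   prediction of what reflection concludes, given candidate values.
    # close_enough(a, b) -> bool: hypothetical test that two states of
    #   knowledge about values agree (itself value-laden, which is part of
    #   the difficulty).
    values = initial_values
    for _ in range(max_iters):
        reflected = imitate_on_reflection(values)
        if close_enough(values, reflected):
            return reflected  # a fixpoint, though not necessarily a good one
        values = reflected
    return values  # no fixpoint reached within the iteration budget
```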