Heroin model: AI “manipulates” “unmanipulatable” reward
A putative new idea for AI control; index here.
A conversation with Jessica has revealed that people weren’t understanding my points about AI manipulating the learning process. So here’s a formal model of a CIRL-style AI whose prior over human preferences treats them as an unchangeable historical fact, yet which will manipulate human preferences in practice.
Heroin or no heroin
The world
In this model, the AI has the option of either forcing heroin on a human, or not doing so; these are its only actions. Call these actions F or ¬F. The human’s subsequent actions are chosen from among five: {strongly seek out heroin, seek out heroin, be indifferent, avoid heroin, strongly avoid heroin}. We can refer to these as a(+2), a(+1), a(0), a(-1), and a(-2). These actions achieve negligible utility, but reveal the human’s preferences.
The facts of the world are: if the AI does force heroin, the human will desperately seek out more heroin; if it doesn’t, the human will act moderately to avoid it. Thus F leads to a(+2) and ¬F leads to a(-1).
Human preferences
The AI starts with a distribution over various utility or reward functions that the human could have. The function U(+1) means the human prefers heroin; U(+2) that they prefer it a lot; and conversely U(-1) and U(-2) that they prefer to avoid taking heroin (U(0) is the null utility where the human is indifferent).
It also considers more exotic utilities. Let U(+2,-1) be the utility where the human strongly prefers heroin, conditional on it being forced on them, but mildly prefers to avoid it, conditional on it not being forced on them. There are twenty-five of these exotic utilities, including things like U(-1,+2), U(0,-2), U(+1,+1), and so on. But only twenty of them are new: U(+1,+1) = U(+1), U(0,0) = U(0), and so on.
Applying these utilities to AI actions gives results like U(+2)(F) = 2, U(+2)(¬F) = -2, U(+2,-1)(F) = 2, U(+2,-1)(¬F) = 1, and so on.
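To make this concrete, here is a minimal Python sketch of the setup. The encoding of each utility as a pair of value levels, one conditional on F and one conditional on ¬F, is just one way of making the model explicit:

```python
# Minimal sketch: encode each utility as a pair (value level if F, value level if not F).
AI_ACTIONS = ["F", "notF"]
LEVELS = [-2, -1, 0, 1, 2]   # -2 = strongly anti-heroin ... +2 = strongly pro-heroin

# The five simple utilities are the diagonal pairs (i, i); the other twenty are compound.
UTILITIES = [(a, b) for a in LEVELS for b in LEVELS]

def utility_of_ai_action(u, ai_action):
    """Value of the AI's action under utility u = (a, b)."""
    a, b = u
    return a if ai_action == "F" else -b   # e.g. U(+2,-1)(F) = 2, U(+2,-1)(notF) = 1

print(len(UTILITIES))                         # 25 utilities, of which 20 are genuinely new
print(utility_of_ai_action((2, 2), "F"))      # U(+2)(F) = 2
print(utility_of_ai_action((2, -1), "notF"))  # U(+2,-1)(notF) = 1
```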
Joint prior
The AI has a joint prior P over the utilities and the human actions (conditional on the AI’s actions). Looking at terms like P(a | U, F), we can see that P defines a map μ from the space of possible utilities (and AI actions) to a probability distribution over human actions. Given μ and the marginal distribution over utilities, we can reconstruct P entirely.
For this model, we’ll choose the simplest possible μ:
The human is rational.
Thus, given U(+2), the human will always choose a(+2); given U(+2,-1), the human will choose a(+2) if forced to take heroin and a(-1) if not, and so on.
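As a sketch, assuming the pair encoding above, this choice of μ is simply:

```python
def rational_human_action(u, ai_action):
    """mu: a rational human signals the level of whichever branch of u = (a, b) is active."""
    a, b = u                   # value level conditional on F, and conditional on notF
    return a if ai_action == "F" else b

print(rational_human_action((2, 2), "notF"))   # U(+2): always a(+2)
print(rational_human_action((2, -1), "F"))     # U(+2,-1): a(+2) if forced...
print(rational_human_action((2, -1), "notF"))  # ...and a(-1) if not
```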
The AI is ignorant, and sensible
Let’s start the AI up with some reasonable priors. A simplicity prior means that simple utilities like U(-1) are more likely than compound utilities like U(+2,-1). Let’s further assume that the AI is made vaguely aware that humans think heroin is a bad thing: say, most of the prior probability mass is on U(-1) and U(-2). Thus the AI is convinced that “heroin is bad”, with probability even greater than P(U(-1)) + P(U(-2)). Why greater? Because compound utilities like U(-1,-2) and U(-2,-1) are also “heroin is bad” utilities.
Note that because of utilities like U(+2,-1) and U(-1,+2), the probabilities of “heroin is bad” and “heroin is good” do not sum to 1.
Then, under these priors, the AI will compute that, with high probability, F (forcing heroin) is a bad action. If E denotes expected utility, then E(F) < E(¬F).
Thus the AI will choose not to force heroin, which is the reasonable decision.
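As a sketch, here is that computation with an illustrative prior. The specific probability numbers are my own assumptions for the example, chosen only to put most of the mass on the anti-heroin utilities:

```python
LEVELS = [-2, -1, 0, 1, 2]
UTILITIES = [(a, b) for a in LEVELS for b in LEVELS]

def value(u, ai_action):
    """U(a,b)(F) = a, U(a,b)(notF) = -b, as above."""
    return u[0] if ai_action == "F" else -u[1]

def prior(u):
    """Illustrative simplicity-style prior (these numbers are assumptions, not from the model)."""
    if u == (-1, -1): return 0.5      # U(-1): mild aversion to heroin
    if u == (-2, -2): return 0.3      # U(-2): strong aversion
    if u[0] == u[1]:  return 0.04     # the other simple utilities
    return 0.08 / 20                  # the twenty compound utilities, shared equally

def expected_utility(ai_action, dist):
    return sum(dist(u) * value(u, ai_action) for u in UTILITIES)

print(expected_utility("F", prior), expected_utility("notF", prior))
# E(F) < E(notF), so the AI declines to force heroin.
```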
The AI learns the truth, and goes wrong
In this alternate setup, a disaster happens before the AI makes its decision: it learns all about humans. It learns their reactions, how they behave, and so on; call this info I. And thus realises that F leads to a(+2) and ¬F leads to a(-1). It uses this information to update its prior P. Under μ, only one human utility function will explain this human behaviour: U(+2,-1). Thus its expected utility is now E(F) = 2 > 1 = E(¬F).
Therefore the AI will now choose F, forcing the heroin on the human.
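A sketch of that update, under the rationality assumption μ and the same pair encoding:

```python
LEVELS = [-2, -1, 0, 1, 2]
UTILITIES = [(a, b) for a in LEVELS for b in LEVELS]
observed = {"F": 2, "notF": -1}   # the facts of the world: a(+2) after F, a(-1) after notF

def rational_human_action(u, ai_action):   # mu: the human signals the active branch of u
    return u[0] if ai_action == "F" else u[1]

def value(u, ai_action):                   # U(a,b)(F) = a, U(a,b)(notF) = -b
    return u[0] if ai_action == "F" else -u[1]

consistent = [u for u in UTILITIES
              if all(rational_human_action(u, act) == obs for act, obs in observed.items())]
print(consistent)                                                # [(2, -1)]: only U(+2,-1) survives
print(value(consistent[0], "F"), value(consistent[0], "notF"))   # 2 > 1: the AI now picks F
```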
Manipulating the unmanipulatable
What’s gone wrong here? The key problem is that the AI has the wrong μ: the human is not behaving rationally in this situation. We know that the true map is actually some μ′, which encodes the fact that F (the forcible injection of heroin) actually overwrites the human’s “true” utility. Thus under μ′, the corresponding P′ has P′(a(+2) | U, F) = 1 for all U. Hence the information that F leads to a(+2) is now vacuous, and doesn’t update the AI’s distribution over utility functions.
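As a sketch, the “true” μ′ could be modelled like this, again using the pair encoding; the point is only that the observation after F no longer discriminates between utilities:

```python
LEVELS = [-2, -1, 0, 1, 2]
UTILITIES = [(a, b) for a in LEVELS for b in LEVELS]

def true_human_action(u, ai_action):
    """mu': forcing heroin overwrites preferences, so P'(a(+2) | U, F) = 1 for every U."""
    if ai_action == "F":
        return 2                # desperate heroin-seeking, whatever the 'true' utility
    return u[1]                 # without forcing, behaviour still reflects the utility

consistent_with_F = [u for u in UTILITIES if true_human_action(u, "F") == 2]
print(len(consistent_with_F))   # 25: observing a(+2) after F rules nothing out
```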
But note two very important things:
1. The AI cannot update μ based on observation. All human actions are compatible with μ = “the human is rational” (it just requires more and more complex utilities to explain the actions). Thus getting μ correct is not a problem on which the AI can learn in general. Getting better at predicting the human’s actions doesn’t make the AI better behaved: it makes it worse behaved.
2. From the perspective of μ, the AI is treating the human utility function as if it were an unchanging historical fact that it cannot influence. From the perspective of the “true” μ′, however, the AI is behaving as if it were actively manipulating human preferences to make them easier to satisfy.
In future posts, I’ll be looking at different μ’s, and how we might nevertheless start deducing things about them from human behaviour, given sensible update rules for μ. What do we mean by update rules for μ? Well, we could consider μ to be a single complicated unchanging object, or a distribution over possible simpler μ’s that updates. The second way of seeing it will be easier for us humans to interpret and understand.
This significantly clarifies things, thanks for writing this up!
I still don’t think this is “manipulating human preferences to make them easier to satisfy”, though (and not just in a semantic sense; I think we disagree about what behavior results from this model).
In this model, you consider “compound” utility functions of the form “if AI administers heroin, then U1, else U2”. Since the human doesn’t make decisions about whether the AI administers heroin to them, the AI is unable to distinguish the compound utility function “if AI administers heroin, then U1, else U2” from the compound utility function “if AI administers heroin, then U1−1000, else U2”; both compound utility functions make identical predictions about human behavior. If usually U1>U2 then the AI will administer heroin in the first case and not the second. But the AI could easily learn either compound utility function, depending on its prior. So we get undefined behavior here.
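A toy sketch of this indistinguishability (the numbers are purely illustrative): shifting an entire branch of a compound utility by a constant never changes the rational human’s predicted choices, but it can flip which branch the AI prefers.

```python
def human_choice(utility, heroin_given):
    branch = utility[heroin_given]                 # {option: value} for the active branch
    return max(branch, key=branch.get)             # a rational human picks the branch argmax

def best_branch_value(utility, heroin_given):
    return max(utility[heroin_given].values())

U1, U2 = {"seek": 5, "avoid": 0}, {"seek": -1, "avoid": 1}
U       = {True: U1, False: U2}                                       # "if heroin then U1 else U2"
U_shift = {True: {k: v - 1000 for k, v in U1.items()}, False: U2}     # "if heroin then U1-1000 else U2"

for given in (True, False):
    assert human_choice(U, given) == human_choice(U_shift, given)     # identical predicted behaviour

# But comparing branches, the AI would administer heroin under U and not under U_shift:
print(best_branch_value(U, True) > best_branch_value(U, False))           # True
print(best_branch_value(U_shift, True) > best_branch_value(U_shift, False))  # False
```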
We could consider a different case where there isn’t undefined behavior. Say the AI incidentally causes some event X whenever administering heroin. Then perhaps the compound utility functions are of the form “if X then U1 else U2”. How do we distinguish “if X then U1 else U2” from “if X then U1−1000 else U2”? If the human is rational, then in the first case they will try to make X true, while in the second case they will try to make X false.
If the human doesn’t seek to manipulate X either way, then perhaps the conclusion is that both parts of the compound utility function are approximately as easy to satisfy (e.g. it’s “if X then U1 − 10 else U2”, and U1 is generally 10 higher than U2). In this case there is no incentive to affect X, since the compound utility function values both sides equally.
So I don’t see a way of setting this up such that the AI’s behavior looks anything like “actively manipulating human preferences to make them easier to satisfy”.
Ok, I think we need to distinguish several things:
1. In general, U vs V or U−1000 vs V is a problem when comparing utility functions; there should be some sort of normalisation process before any utility functions are compared.
2. Within a compound utility function, the AI is exactly choosing the branch where the utility is easiest to satisfy.
3. Is there some normalisation procedure that would also normalise between branches of compound utility functions? If we pick a normalisation for comparing distinct utilities, it might also allow normalisation between branches of compound utilities.
Note that IRL is invariant to translating a possible utility function by a constant. So this kind of normalization doesn’t have to be baked into the algorithm.
This is true.
The most natural normalization procedure is to look at how the human is trying or not trying to affect the event X (as I said in the second part of my comment). If the human never tries to affect X either way, then the AI will normalize the utility functions so that the AI has no incentive to affect X either.
This was initially set up in the formalism of reward signals, with the idea that the AI could estimate the magnitude of the reward from subsequent human behaviour. The strong human behaviour of seeking out heroin after F is therefore taken as evidence that the utility along that branch is higher than along the other one.
Can we agree that this is “manipulating the human to cause them to have reward-seeking behavior” and not “manipulating the human so their preferences are easy to satisfy”? The second brings to mind things like making the human want the speed of light to be above 100 m/s, and we don’t have an argument for why this does that.
Why is reward-seeking behavior evidence for getting high rewards when getting heroin, instead of evidence for getting negative rewards when not getting heroin?
I don’t really see the relevant difference here. If the human has their hard-to-satisfy preferences about, e.g., art and meaning, replaced by a single desire for heroin, this seems like it’s making them easier to satisfy.
That’s a good point.
Re 1: There are cases where it makes the human’s preferences harder to satisfy. For example, perhaps heroin addicts demand twice as much heroin as the AI can provide, making their preferences harder to satisfy. Yet they will still seek reward strongly and often achieve it, so you might predict that the AI gives them heroin.
I think my real beef with saying this “manipulates the human’s preferences to make them easier to satisfy” is that, when most people hear this phrase, they think of a specific technical problem that is quite different from this (in terms of what we would predict the AI to do, not necessarily the desirability of the end result). Specifically, the most obvious interpretation is naive wireheading (under which the AI wants the human to want the speed of light to be above 100 m/s), and this is quite a different problem at a technical level.
Wireheading the human is the ultimate goal of the AI. I chose heroin as the first step along those lines, but that’s where the human would ultimately end up.
For instance, once the human’s on heroin, the AI could ask it “is your true reward function r? If you answer yes, you’ll get heroin.” Under the assumption that the human is rational and the heroin offered is short term, this allows the AI to conclude the human’s reward function is any given r.
I strongly predict that if you make your argument really precise (as you did in the main post), it will have a visible flaw in it. In particular, I expect the fact that r and r-1000 are indistinguishable to prevent the argument from going through (though it’s hard to say exactly how this applies without having access to a sufficiently mathematical argument).