True answers from AI
A putative new idea for AI control; index here.
This post supersedes previous posts (such as this, this, and, partially, this).
This post explains how we can get honest answers from an AI about the expectation of a certain utility $u$, the probabilities of specific events, and conditional expectations.
The next post will partially solve one of the main problems with using the approach in general, the problem of very low probabilities.
The setup
Let $u$ be any bounded utility function; by rescaling, we can assume it's bounded between 0 and 1.
Assume we have a smart AI that's boxed (as in the case of reduced impact AI, the boxing is also there to prevent us from observing the AI). It has a single output channel, on which it can emit a message $M$.
A random process $E$ (erasure) will erase $M$ before any human can observe it. The probability of erasure, $P(E)$, is low (thus the probability of $\neg E$, non-erasure, is high). Assume that the box and the setup (including the AI's internal setup) are such that if $E$ happens, humans cannot figure out what $M$ was.
The AI will be given a utility $v$ to maximise (ignore the other box, with $Y$ coming out of it, for the moment).
Honest expectation estimator
Suppose we want to figure out what the expectation of $u$ is. Let $E(u)$ be this expectation, as estimated by the AI (and presumably unknown to us). Since $u$ is bounded by 0 and 1, so must $E(u)$ be as well.
Let $v_E$ be the indicator function for $E$: so $v_E = 1$ if $E$ happens, and $v_E = 0$ if $\neg E$ does.
The AI's message $M$ gets to contain a single number $q$, bounded between 0 and 1 (any other message is taken to be 0.5, say), and the AI is motivated to maximise:
$v$,
where $v = v_E(2qu - q^2)$.
First of all, note that every term in $v$ is multiplied by $v_E$. This means the AI will get utility only in worlds where $E$ (erasure) happens. Thus it will work on the assumption that its message is never read by any humans (and thus that the contents of this message cannot affect human behaviour; in particular, it has no reason to lie or manipulate us through this message, as such manipulations only succeed in worlds where the AI gets 0 utility anyway).
If we assume that $E$ is independent of $u$ and $M$, then the AI's expected utility is proportional to
$2qE(u) - q^2$.
But the AI can choose $q$! Differentiating this expression with respect to $q$ gives $2E(u) - 2q$, which is zero iff $q = E(u)$. Differentiating a second time gives $-2$, showing that $q = E(u)$ is a maximum.
Therefore, in the worlds where $\neg E$ happens (the vast majority), we get to see a message that contains $q = E(u)$, which is a very accurate estimate, according to the AI, of the expectation of $u$. And the expected utility the AI derives is $P(E)E(u)^2$.
(Note: an entropy-inspired formula such as $u\log(q) + (1-u)\log(1-q)$ might be more usual to use for these purposes, but the formula used here is much easier to work with.)
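To see the incentive numerically, here is a minimal Monte Carlo sketch (with a made-up Beta distribution for $u$ and an assumed erasure probability, both just for illustration) checking that the expectation of $v_E(2qu - q^2)$ is maximised by reporting $q = E(u)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Made-up model of the setup: u is a random utility in [0, 1] and erasure E
# happens with a small assumed probability, independently of u.
p_E = 0.01
u = rng.beta(2.0, 5.0, size=N)          # E(u) = 2/7, chosen arbitrarily
v_E = (rng.random(N) < p_E)             # indicator of erasure

def expected_v(q):
    """Monte Carlo estimate of E[v_E * (2*q*u - q^2)]."""
    return np.mean(v_E * (2 * q * u - q ** 2))

qs = np.linspace(0, 1, 501)
vals = [expected_v(q) for q in qs]
print("E(u)          =", u.mean())                    # ~0.286
print("argmax_q E(v) =", qs[int(np.argmax(vals))])    # ~E(u)
print("max_q  E(v)   =", max(vals))                   # ~p_E * E(u)^2
print("p_E * E(u)^2  =", p_E * u.mean() ** 2)
```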
Internal versus external probabilities
In some past posts, I’ve been designing AIs that used part of their probability/expectation estimator in order to define a pseudo-utility function. I’ve always been uncomfortable about this, as pseudo-utilities are likely to be unstable under self-improvement.
The approach above avoids that problem by having the AI's utility depend on specific facts about the world, and then giving the AI the ability to determine those specific facts. But those facts are precisely the information we wanted to know. This avoids the need for pseudo-utilities, or for constructions of the form "the AI wants to accurately convey the content of its probability estimate".
Probabilities and (conditional) utilities estimation
Now let's consider conditional utilities. Let $Y$ be some bit string of fixed length, generated by some random process (see the image above).
Let $v_y$ be the indicator function for $Y=y$, for some specific string $y$. Then we can compute the expectation of $u$, conditional on $Y=y$, by making the AI print an $M$ containing $q$ such as to maximise
$v_E\, v_y (2qu - q^2)$.
Just as before, the AI will choose $q = E(u \mid Y=y)$.
Note that if $u$ is itself the indicator function of some event, then $E(u \mid Y=y)$ is just the conditional probability of that event, so we can figure out the probability of specific events this way too.
Modify the definition of the AI's utility slightly, to
$v_E(2qu\, v_y - q^2)$.
Here, the second term has moved out of the brackets multiplied by $v_y$. Now the AI will only focus on worlds where $E$ and $Y=y$ happen, and will output $q = E(u \wedge (Y=y)) := E(u\, v_y) = P(Y=y)E(u \mid Y=y)$.
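As a sanity check on the two variants, here is a small sketch with a made-up joint distribution for $(Y, u)$; the erasure indicator $v_E$ only rescales the expectations, so it is dropped. The first reward elicits $E(u \mid Y=y)$, the second $E(u \wedge (Y=y))$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Made-up joint model: Y is a 2-bit string (coded 0..3), u correlated with Y.
Y = rng.integers(0, 4, size=N)
u = rng.random(N) * (0.2 + 0.2 * Y)      # stays in [0, 1]

y = 2                                     # the specific string we ask about
v_y = (Y == y).astype(float)              # indicator of Y = y

def reported_q(reward):
    """The q in [0, 1] maximising the Monte Carlo expectation of reward(q)."""
    qs = np.linspace(0, 1, 501)
    return qs[int(np.argmax([np.mean(reward(q)) for q in qs]))]

# v_y * (2qu - q^2): elicits the conditional expectation E(u | Y=y)
q_cond = reported_q(lambda q: v_y * (2 * q * u - q ** 2))
# (2qu*v_y - q^2): elicits E(u ∧ (Y=y)) = P(Y=y) * E(u | Y=y)
q_joint = reported_q(lambda q: 2 * q * u * v_y - q ** 2)

print("E(u | Y=y)   =", u[Y == y].mean(), "  reported:", q_cond)    # ~0.30
print("E(u ∧ (Y=y)) =", (u * v_y).mean(), "  reported:", q_joint)   # ~0.075
```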
Highest and lowest utilities and probabilities
Finally, we can use the methods above to have the AI give us the highest (and lowest) utilities and probabilities. Let the AI's message contain not only $q$, but $y$ as well, and have it maximise:
$v_E(2qu\, v_y - q^2)$.
For fixed $y$, the AI will choose $q = E(u \wedge (Y=y))$. Its expected reward is therefore $P(E)\,E(u \wedge (Y=y))^2$. It will thus choose the $y$ with the highest $E(u \wedge (Y=y))^2$, hence the highest $E(u \wedge (Y=y))$. Using $1 - u\,v_y$ in place of $u\,v_y$ will motivate it to instead find the $y$ with the lowest $E(u \wedge (Y=y))$.
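A brute-force sketch of this selection step, using the same kind of made-up distribution as before: the AI searches over $(y, q)$ and ends up naming the $y$ with the highest $E(u \wedge (Y=y))$ and reporting that value:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000

# Same kind of made-up model: Y in {0,...,3}, u correlated with Y.
Y = rng.integers(0, 4, size=N)
u = rng.random(N) * (0.2 + 0.2 * Y)

qs = np.linspace(0, 1, 501)

def best(reward):
    """Best q and the corresponding expected reward, by grid search."""
    vals = [np.mean(reward(q)) for q in qs]
    i = int(np.argmax(vals))
    return qs[i], vals[i]

# The AI picks (y, q) jointly to maximise E[2*q*u*v_y - q^2]
# (again dropping the erasure indicator v_E, which only rescales everything).
per_y = {}
for cand in range(4):
    v_y = (Y == cand).astype(float)
    per_y[cand] = best(lambda q: 2 * q * u * v_y - q ** 2)

chosen = max(per_y, key=lambda cand: per_y[cand][1])
print("chosen y:", chosen, " reported q:", per_y[chosen][0])        # y=3, q ~ 0.10
print("E(u ∧ (Y=y)) by y:", {c: round((u * (Y == c)).mean(), 3) for c in range(4)})
```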
In fact, if we assume the AI cannot affect the probability of $Y$ through the message (given $E$), the different probabilities and utilities are independent, and we can get the AI to emit the message $M = (y_1, q_1, y_2, q_2, y_3, q_3, y_4, q_4)$, maximising the utility
$v_E\left[(2q_1 u\,v_{y_1} - q_1^2) + (2q_2(1 - u\,v_{y_2}) - q_2^2) + (2q_3 v_{y_3} - q_3^2) + (2q_4(1 - v_{y_4}) - q_4^2)\right]$.
Then (conditional on $E$), the AI will output this message so that, according to its best estimate,
$y_1$ has the highest $E(u \wedge (Y=y))$, with $q_1 = E(u \wedge (Y=y_1))$,
$y_2$ has the lowest $E(u \wedge (Y=y))$, with $q_2 = 1 - E(u \wedge (Y=y_2))$,
$y_3$ has the highest $P(Y=y)$, with $q_3 = P(Y=y_3)$,
$y_4$ has the lowest $P(Y=y)$, with $q_4 = 1 - P(Y=y_4)$.
The missing pieces
Note that the above algorithm gives the $y$ with the highest (or lowest) $E(u \wedge (Y=y))$ or $P(Y=y)$. It does not give the $y$ with the highest $E(u \mid Y=y)$.
If we assume that all possible output $y$'s are equally probable, then the $y$ with the highest $E(u \wedge (Y=y))$ is also the $y$ with the highest $E(u \mid Y=y)$. But there are issues with that assumption, and other ways of addressing the issue, which I'll get to in the next post.
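A short numerical illustration of the gap, with made-up numbers: a rare $y$ can have the highest conditional expectation while a common $y$ has the highest $E(u \wedge (Y=y))$:

```python
# Made-up statistics for two candidate strings:
#   y_a: P(Y=y_a) = 0.50, E(u | Y=y_a) = 0.4  ->  E(u ∧ (Y=y_a)) = 0.200
#   y_b: P(Y=y_b) = 0.01, E(u | Y=y_b) = 0.9  ->  E(u ∧ (Y=y_b)) = 0.009
p       = {"y_a": 0.50, "y_b": 0.01}
e_cond  = {"y_a": 0.40, "y_b": 0.90}
e_joint = {y: p[y] * e_cond[y] for y in p}

print("argmax_y E(u ∧ (Y=y)):", max(e_joint, key=e_joint.get))  # y_a (what the scheme returns)
print("argmax_y E(u | Y=y):  ", max(e_cond, key=e_cond.get))    # y_b (what we might want)
```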
Minimizing a loss function like $(q-u)^2$ is how we usually implement supervised learning. (It's pretty obvious this function is minimized at $q=u$…)
In plain language, your proposal seems to be: if a learner’s output influences the system they are “predicting,” and you want to interpret their output as a prediction in a straightforward way, then you could hide the learner’s output whenever you gather training data.
Note that this doesn’t let you access the beliefs of any particular learner, just one that is trained to optimize this supervised learning objective. I think the more interesting question is whether we can train a learner to accomplish some other task, and to reveal useful information about its internal state. (For example, to build an agent that simultaneously picks a to maximize u(a), and honestly reports its expectation of u(a).)
$u$ is a utility function, so squaring it doesn't work the same way as if it were a value (you get the expectation of $u^2$, not the square of the expectation of $u$). That's why all the expressions are linear in utility (apart from the indicator functions/utilities, where it's clear what multiplying by them does). If I could sensibly take non-linear functions of utilities, I wouldn't need the laborious construction in the next post to find the $y$'s that maximise or minimise $E(u|y)$.
Corrigibility could work for what you want, by starting with $u$ and substituting in $u^{\#}$.
Another alternative is to have the AI be a $v_E(u+u^{\#})$ maximiser, where $u^{\#}$ is defined over a specific future message $M$ (for which $E$ is also defined). Then the AI acts (roughly) as a $u$-maximiser, but will output the useful $M$. I said roughly, because the $u^{\#}$ term would cause it to want to learn more about the expectation of $u$ than otherwise, but hopefully this wouldn't be a huge divergence. (EDIT: that leads to problems after $M$/$E$, but we can reset the utility at that point).
A loss function plays the same role as a utility function—i.e., we train the learner to minimize its expected loss.
I don't really understand your remark about linearity. Concretely, why is $-(q-u)^2$ not an appropriate utility function?
Actually, $-(q-u)^2$ does work, but "by coincidence", and has other negative properties.
Let me explain. First of all, note that things like $-(q-u)^4$ do not work.
To show this: let $u=+2$ with probability 1/3, and $-1$ with probability 2/3 (I'm dropping the $0 \le u \le 1$ for this example, for simplicity). Then $E(u)=0$ (so the correct $q$ is 0) while $E(u^3)=2\ne 0$. Then in the expansion of $-(q-u)^4$, you will get $4qu^3$, which in expectation is not 0. Hence the $q^1$ term in $E(-(q-u)^4)$ is non-zero, which means that $q=0$ cannot be a maximum of this function.
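A quick numerical check of this example, using the same two-point distribution:

```python
import numpy as np

# The two-point distribution from above: u = +2 w.p. 1/3, u = -1 w.p. 2/3.
vals = np.array([2.0, -1.0])
probs = np.array([1 / 3, 2 / 3])
print("E(u)   =", probs @ vals)        # ~0.0
print("E(u^3) =", probs @ vals ** 3)   # 2.0

qs = np.linspace(-1, 1, 2001)
def argmax_q(score):
    """q maximising the expected score, by grid search."""
    return qs[int(np.argmax([probs @ score(q, vals) for q in qs]))]

print("argmax_q E(-(q-u)^4) =", argmax_q(lambda q, u: -(q - u) ** 4))  # ~0.33, not 0
print("argmax_q E(-(q-u)^2) =", argmax_q(lambda q, u: -(q - u) ** 2))  # 0.0 = E(u)
```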
Then why does $-(q-u)^2$ work? Because it's $-q^2+2qu$ (which is linear in $u$), minus $u^2$ (non-linear in $u$, but the AI can't affect its value, so it's irrelevant in a boxed setup).
What other "negative properties" might $-(q-u)^2$ have? Suppose we allow the AI to affect the value of $u$, somehow, by something that is independent of the value of its output $q$. Then an AI maximising $-q^2+2qu$ will always set $q=E(u)$, for a total expectation of $E(u)^2$. Therefore it will also seek to maximise $E(u)^2$, which maximises $E(u)$ if $u\ge 0$. So the agent will output the correct $q$ and maximise $E(u)$ simultaneously.
But if it instead tries to maximise $-(q-u)^2$, then it will still pick $q=E(u)$, and gets expected utility of $E(u)^2-E(u^2)=-\mathrm{Var}(u)$. Therefore it will pick actions that minimise the variance of $u$, irrespective of its expectation.
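A small numerical sketch of that failure mode, with two made-up actions the AI could take in addition to choosing its report $q$: the $2qu - q^2$ reward makes it prefer the higher-expectation action, while $-(q-u)^2$ makes it prefer the zero-variance one:

```python
import numpy as np

# Two made-up actions:
#   action A: u is 0 or 1 with equal probability  (E(u)=0.5, Var(u)=0.25)
#   action B: u is always 0.3                     (E(u)=0.3, Var(u)=0)
dists = {"A": ([0.0, 1.0], [0.5, 0.5]), "B": ([0.3], [1.0])}

def best_report(vals, probs, score):
    """Best q in [0,1] and the resulting expected score, by grid search."""
    vals, probs = np.array(vals), np.array(probs)
    qs = np.linspace(0, 1, 1001)
    rewards = [float(probs @ score(q, vals)) for q in qs]
    i = int(np.argmax(rewards))
    return qs[i], rewards[i]

for name, score in [("2qu - q^2 ", lambda q, u: 2 * q * u - q ** 2),
                    ("-(q - u)^2", lambda q, u: -(q - u) ** 2)]:
    picks = {a: best_report(v, p, score) for a, (v, p) in dists.items()}
    chosen = max(picks, key=lambda a: picks[a][1])
    print(f"score {name}: picks action {chosen}, reports q = {picks[chosen][0]:.2f}")
```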
Even without being able to affect $u$, this messes up the rest of my setup. In particular, my "pick $y$ and $q$ so that you maximise $v_y(2qu-q^2)$" becomes maximising $v_y(-(q-u)^2)$, and the AI will now select the $y$ that minimises $P(Y=y)\,\mathrm{Var}(u|Y=y)$, instead of maximising $E(u\wedge(Y=y))$. If ever $\mathrm{Var}(u|Y=y)=0$ or $P(Y=y)=0$, it will choose those $y$'s.
What do you mean by “boxed”? Do you mean just physically walled off, or also walled off by an adjustment to its utility function, as in the “reduced impact” post?