Abstract model of human bias
A putative new idea for AI control; index here.
Any suggestions for refining this model are welcome!
Somewhat inspired by the previous post, this is a model of human bias that can be used to test theories that want to compute the “true” human preferences. The basic idea is to formalise the question:
If the AI can make the human give any answer to any question, can it figure out what humans really want?
The AI’s influence
The AI has access to an algorithm $H$, representing the human. It can either interact with $H$ or simulate the interaction correctly.
The interaction consists of describing the outcome of choice $A$ versus choice $B$, and then asking the human which option is better. The set of possible binary choices is $\mathcal{C}$ (thus $(A,B) \in \mathcal{C}$). The set of descriptions is $D$; the set of possible descriptions for $(A,B)$ is $D_{(A,B)} \subseteq D$.
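To make the setup concrete, here is a minimal sketch of the interaction model in Python; the type names are mine, not part of the post:

```python
from typing import Callable, Literal

Choice = tuple[str, str]     # a binary choice (A, B); the set of all such choices is C
Description = str            # an element of D: a description of the outcome of A versus B
Answer = Literal["A", "B"]   # which option the human reports preferring

# The human H, as the AI sees it: given a choice and a description of it,
# H answers which option it prefers. The AI can query this directly
# (interaction) or run it on hypothetical inputs (simulating the interaction).
Human = Callable[[Choice, Description], Answer]
```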
Then we have the assumption that humans can be manipulated:
Given any description $d \in D_{(A,B)}$ for which $H$ prefers $A$ to $B$, there exists a description $d'$, logically equivalent to $d$, such that under $d'$, $H$ prefers $B$ to $A$, and vice-versa.
Note that $d$ could be a paragraph while $d'$ could be a ten-volume encyclopedia; all that’s required is that they be logically equivalent.
But manipulating human answers in the immediate sense is not the only way the AI can influence them. Our values can change through interactions, reflection, and even through being given true and honest information, and the AI can influence this:
There is a wide class of algorithms $\mathcal{G}$, such that for all $H' \in \mathcal{G}$, there exists a sequence of descriptions the AI can give to $H$ that will transform $H$ into $H'$.
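Written out symbolically (one possible formalisation; the notation $H(A,B;d)$ for the answer $H$ gives when $(A,B)$ is presented under description $d$ is mine, not the post's), the first assumption says

$$\forall (A,B) \in \mathcal{C},\ \forall d \in D_{(A,B)}:\quad H(A,B;d) = A \;\Longrightarrow\; \exists d' \in D_{(A,B)},\ d' \equiv d,\ H(A,B;d') = B,$$

and symmetrically with $A$ and $B$ swapped; while the second says

$$\forall H' \in \mathcal{G}:\quad \exists d_1, \dots, d_k \in D \text{ such that } H \text{, after receiving } d_1, \dots, d_k \text{, has become } H'.$$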
The grounding assumptions
So far, we’ve just made the task hopeless: the AI can get any answer from $H$, and can make $H$ into whatever algorithm it feels like. Saying that $H$ has preferences is meaningless.
However, we’re building from a human world where the potential for humans to manipulate other humans is limited, and somewhat recognisable. Thus:
There exists a subset $\mathcal{C}_F \subseteq \mathcal{C}$ (called standard choices) such that, for all $(A,B) \in \mathcal{C}_F$, there exists a subset $D^F_{(A,B)} \subseteq D_{(A,B)}$ (called standard descriptions) such that any interaction in which $H$ is given some $d \in D^F_{(A,B)}$ is tagged as fair and highly reflective of the true values of $H$.
Basically these are examples of interactions that are agreed to be fair, honest, and informative. The more abstract the choices, the harder it is to be sure of this.
Of course, we’d also allow the AI to learn from examples of negative interactions as well:
There exists a subset $\mathcal{C}_M \subseteq \mathcal{C}$ such that, for all $(A,B) \in \mathcal{C}_M$, there exists a subset $D^M_{(A,B)} \subseteq D_{(A,B)}$ such that any interaction in which $H$ is given some $d \in D^M_{(A,B)}$ is tagged as a manipulative interaction with $H$.
Finally, we might want a way to encode human meta-preferences:
Among the descriptions tagged as fair or manipulative, there are some that refer to the process of providing descriptions itself.
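Putting the three grounding assumptions together, the data the AI can actually learn from might look something like the following sketch (field names and structure are hypothetical, continuing the toy types from the earlier sketch):

```python
from dataclasses import dataclass, field
from typing import Literal

Choice = tuple[str, str]   # (A, B), an element of C
Description = str          # an element of D_(A,B)

@dataclass
class TaggedInteraction:
    """A (choice, description) pair with a human-supplied tag."""
    choice: Choice
    description: Description
    tag: Literal["fair", "manipulative"]
    about_description_process: bool = False  # meta-preference: the description refers
                                             # to the process of giving descriptions itself

@dataclass
class GroundingData:
    """Everything the AI may treat as ground truth about (non-)manipulation."""
    fair: list[TaggedInteraction] = field(default_factory=list)          # from C_F and D^F
    manipulative: list[TaggedInteraction] = field(default_factory=list)  # from C_M and D^M
```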
Building more assumptions in
This still feels like a bare-bones description, unlikely to converge to anything good. For one, I haven’t even defined what “logically equivalent” means. But that’s the challenge for those constructing solutions to the problem of human preferences. Can they construct sufficiently good $\mathcal{C}_F$, $D^F$, $\mathcal{C}_M$ and $D^M$ to converge to some sort of “true” values for $H$? Or, more likely, what extra assumptions and definitions are needed to get such a convergence? And finally, is the result reflective of what we would want?
I think it might be possible to get somewhere with a model of this type if we formalize the idea that manipulation requires considerable optimization power. For example, we can assume that a random description has only a low probability of being manipulative. Or, consider the following stronger assumption: for any algorithm that takes one description as input and produces another description of the same choice as output, if the computing resources available to the algorithm are sufficiently limited, then for most inputs it will not produce a manipulative output.
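One way of writing those two candidate assumptions down, with $M_{(A,B)} \subseteq D_{(A,B)}$ the manipulative descriptions of $(A,B)$, and $\delta$ and $R$ hypothetical thresholds:

$$\Pr_{d \sim D_{(A,B)}}\big[d \in M_{(A,B)}\big] < \delta,$$

and, for any description-rewriting algorithm $f : D_{(A,B)} \to D_{(A,B)}$ using fewer than $R$ units of computing resources,

$$\Pr_{d \sim D_{(A,B)}}\big[f(d) \in M_{(A,B)}\big] < \delta,$$

where $d \sim D_{(A,B)}$ stands for whatever "random description" distribution the formalisation settles on.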
Those are some of the lines I was thinking along. But it’s not clear whether the peak of the distribution is close to accuracy, human bias and poor understanding being what they are.
I agree that even without manipulation, human reasoning is wildly inaccurate. But perhaps we can use a model where human reasoning asymptotically converges to something accurate unless subjected to some sort of “destructive manipulation” which is unlikely to happen by chance.
Interesting. How could we formalise that?
The following is one (simplistic) model which might be a useful starting point.
Consider a human and a robot playing a stochastic game, as in CIRL. Suppose that each of them is an oracle machine plugged into a reflective oracle, as in the recent paper by Jan, Jessica and Benya. Let the robot have the following prior over the program implemented by the human: the human implements a random program (i.e. a random string of bits for some prefix-free universal oracle machine), conditional on this program being asymptotically optimal in mean for the class of all robot policies that avoid producing some set of “manipulative action sequences”. Here, the “manipulative sequences” can be any set $S$ of action sequences such that $\sum_{x \in S} n^{-|x|} < \epsilon$, where $|x|$ is the length of the action sequence $x$, $n$ is the number of possible actions, and $\epsilon$ is a parameter on which the prior depends.
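Spelled out slightly more explicitly (my notation: $U$ is the prefix-free universal oracle machine and $\mathrm{Opt}_S(p)$ abbreviates the optimality condition), the prior over the human's program $p$ would be something like

$$\Pr[H = p] \;\propto\; 2^{-|p|}\,\mathbf{1}\big[\mathrm{Opt}_S(p)\big],$$

where $2^{-|p|}$ is the weight of a random bit string for $U$, and $\mathrm{Opt}_S(p)$ says that $p$ is asymptotically optimal in mean against every robot policy whose action sequences avoid the manipulative set $S$, with $S$ constrained by $\sum_{x \in S} n^{-|x|} < \epsilon$.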