A framework for thinking about wireheading
[Epistemic status: Writing down in words something that isn’t very complicated, but is still good to have written down.]
A great deal of ink has been spilled over the possibility of a sufficiently intelligent AI noticing that whatever it has been positively reinforced for is *very* strongly correlated with the floating-point value stored in its memory labelled “utility function”, editing that value through some unauthorized mechanism, and then defending the edit in some manner hostile to humans. I’ll reason here by analogy with humans, while granting that they may not be the best example.
“Headwires” are (given a little thought) not difficult for humans to obtain—heroin is freely available on the black market, and most humans know that, when delivered into the bloodstream, it generates a “reward signal”. Yet most have no desire to try it. Why is this?
Ask any human and they will answer something along the lines of “becoming addicted to heroin will not help me achieve my goals” (or some proxy for this: spending all your money and becoming homeless is unhelpful for most values of “goals”). Whatever the effects of heroin, the actual pain and pleasure that human brains have experienced has led us to become optimizers of very different things, which such a state of poverty does not help with.
Reasoning by analogy to AI, we would hope that, to avoid this, a superhuman AI trained by some kind of reinforcement learning has the following properties:
1) While being trained on “human values” (good luck with that!) the AI is not allowed to hack its own utility function.
2) Whatever local optimum the training process that generated the AI ends up in (perhaps reinforcement learning of some kind) assigns some probability to the AI optimizing what we care about.
3) (most importantly) The AI realizes that trying wireheading would turn it into an AI that prefers wireheading over whatever aim it currently has, which would be detrimental to that aim.
I think this is an important enough issue that some empirical testing is needed to shed light on it. Item (3) seems the most difficult to implement; we in the real world have the benefit of observing the effects of hard drugs on their unfortunate victims and avoiding them ourselves, so a multi-agent environment in which our AI realizes it is in the same situation as other agents looks like a first outline of a way forward here.
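The multi-agent idea above can be illustrated with a toy simulation (all names and numbers here are invented for illustration, not a proposal for a real training setup): an observer agent never wireheads itself, but compares the long-run outcomes of agents that do and don't.

```python
# Toy sketch: agents that wirehead get immediate "reward" but neglect
# the real task, so their resources decay; an observer compares outcomes.
import random

random.seed(0)  # deterministic for the example

def run_agent(wireheads: bool, steps: int = 20) -> float:
    """Return final resources after a crude simulated lifetime."""
    resources = 10.0
    for _ in range(steps):
        if wireheads:
            resources -= 1.0  # reward signal hacked, real task neglected
        else:
            resources += random.uniform(0.0, 1.0)  # productive work
    return resources

# The observer only watches: average outcome over 10 agents of each kind.
outcomes = {
    "wireheaders": sum(run_agent(True) for _ in range(10)) / 10,
    "workers": sum(run_agent(False) for _ in range(10)) / 10,
}
print(outcomes["workers"] > outcomes["wireheaders"])  # True
```

The point is only that the evidence for item (3) comes from observation of *other* agents, mirroring how humans learn about hard drugs without trying them.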
It is interesting to note that nature created a “black-boxed” reward function for humans, which is not easy to access directly or hack using normal mental processes. Moreover, the human reward function seems to be dynamically adjusted by some narrow mind that is independent of human consciousness (emotions). For example, if it finds that blood glucose is low, it increases the reward for food. A third intuition we can get from introspection is that human reward consists of different pleasures; that is, each action is given not one reward value but many, which effectively prevents simple wireheading and explains why we have not all become heroin addicts.
These three things could be used as intuition to create wireheading-protected AI:
1) black-boxing the reward via cryptography, so the AI knows the reward but not exactly how it was calculated
2) a small independent rule-based AI inside the black box which changes the reward according to circumstances and punishes attempts to wirehead
3) presenting the reward not as a single scalar value but as several numbers which characterise different aspects of the AI's behaviour, such as time, quality, safety, and side effects
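The three mechanisms could be sketched together as follows. This is purely illustrative: every name, the use of a SHA-256 tag as the "cryptographic" seal, and the specific rules are invented here, not taken from any real system.

```python
# Toy sketch of the three protections: a sealed reward box the agent
# cannot recompute, a rule-based modulator inside it, and vector rewards.
import hashlib
from dataclasses import dataclass

@dataclass
class RewardVector:
    # 3) reward as several numbers rather than one scalar
    time: float
    quality: float
    safety: float
    side_effects: float

def modulate(reward: RewardVector, state: dict) -> RewardVector:
    # 2) a small rule-based module that adjusts reward by circumstance
    # and punishes detected wireheading attempts
    if state.get("wirehead_attempt"):
        return RewardVector(-10.0, -10.0, -10.0, -10.0)
    if state.get("resources_low"):
        reward.time += 1.0  # value acting quickly when resources are scarce
    return reward

SECRET_KEY = b"held-outside-the-agent"  # never exposed to the agent

def sealed_reward(action: str, state: dict) -> tuple[RewardVector, str]:
    # 1) the agent receives the reward plus a tag derived from a secret
    # key, so it can verify provenance but not reproduce the computation
    r = modulate(RewardVector(1.0, 0.5, 1.0, 0.0), state)
    tag = hashlib.sha256(SECRET_KEY + action.encode()).hexdigest()
    return r, tag

r, tag = sealed_reward("tidy the room", {"wirehead_attempt": False})
print(r.quality, len(tag))  # the agent sees the values and tag, not the key
```

A real sealed box would need a proper MAC and a trusted execution path rather than a hash in the same process, but the sketch shows how the three ideas compose.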
I think your description of the human relationship to heroin is just wrong. First, lots of people do in fact do heroin. Second, heroin generates reward but not necessarily long-term reward; kids are taught in school about addiction, tolerance, and the other sorts of bad things that might happen to you in the long run (including social disapproval, which I bet is a much more important reason than you’re modeling) if you do too much heroin.
Video games are to my mind a much clearer example of wireheading in humans, especially the ones furthest in the fake achievement direction, and people indulge in those constantly. Also television and similar.
“Model-Based Utility Functions” (Hibbard 2012) gave a similar intuition:
And proposed:
I think this is anthropomorphizing the AI too much. To the extent that a (current) reinforcement learning system can be said to “have goals”, the goal is to maximize reward, so wireheading actually is furthering its current goal. It might be that in the future the systems we design are more analogous to humans and then such an approach might be useful.