Argue that wireheading, unlike many other reward gaming or reward tampering problems, is unlikely in practice because the model would have to learn to value the actual transistors storing the reward, which seems exceedingly unlikely in any natural environment.
Humans don’t wirehead because reward reinforces the thoughts which the brain’s credit assignment algorithm deems responsible for producing that reward. Reward is not, in practice, that-which-is-maximized—reward is the antecedent-thought-reinforcer: it reinforces that which produced it. And when a person does a rewarding activity, like licking lollipops, they are thinking thoughts about reality (like “there’s a lollipop in front of me” and “I’m picking it up”), and so these are the thoughts which get reinforced. This is why many human values are about latent reality, and not about the human’s beliefs about reality or about the activation of the reward system.
It seems that you’re postulating that the human brain’s credit assignment algorithm is so bad that it can’t tell what high-level goals generated a particular action, and so would give credit just to thoughts directly related to the current action. That seems plausible for humans, but my guess would be against that for advanced AI systems.
No, I don’t intend to postulate that. Can you tell me a mechanistic story of how better credit assignment would go, in your worldview?
Disclaimer: At the time of writing, this has not been endorsed by Evan.
I can give this a go.
Unpacking Evan’s Comment:
My read of Evan’s comment (the parent to yours) is that there are a bunch of learned high-level goals (“strategies”) with varying levels of influence on the tactical choices made, and that a well-functioning end-to-end credit-assignment mechanism would propagate credit through action selection (“thoughts directly related to the current action,” or “tactics”) all the way back to strategy creation/selection/weighting. In such a system, strategies which decide tactics which emit actions which receive reward are selected for, at the expense of strategies less good at that. Conceivably, strategies aiming directly for reward would produce tactical choices that are more highly rewarded than those of strategies not aiming quite so directly.
One way for this not to be how humans work would be if reward did not propagate to the strategies, and they were selected/developed by some other mechanism while reward only honed/selected tactical cognition. (You could imagine that “strategic cognition” is that which chooses bundles of context-dependent tactical policies, and “tactical cognition” is that which implements a given tactic’s choice of actions in response to some context.) This feels to me close to what Evan was suggesting you were saying is the case with humans.
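To make the contrast concrete, here is a minimal toy sketch of the two regimes, in Python. It is my own illustration rather than anything from Evan’s comment: the strategy names, the 1.0 vs. 0.9 payoffs, and the learning rate are all made-up assumptions.

```python
import random

# Hypothetical toy sketch (mine, not from Evan's comment): two "strategies",
# each of which, when in control, emits a tactic whose actions earn some
# reward. The only question modelled is whether that reward propagates back
# to the weights deciding which strategy gets control.

# Assumed per-step reward when each strategy's tactic runs (made-up numbers).
STRATEGY_REWARD = {
    "aim_directly_at_reward": 1.0,  # assumed to find the most-rewarded action
    "aim_at_world_states": 0.9,     # assumed slightly less rewarded
}


def run(end_to_end_credit, steps=1000, lr=0.05):
    """Return final strategy weights after `steps` rounds of control and reward."""
    weights = {name: 1.0 for name in STRATEGY_REWARD}
    for _ in range(steps):
        # The strategy in control is sampled in proportion to its current weight.
        chosen = random.choices(list(weights), weights=list(weights.values()))[0]
        reward = STRATEGY_REWARD[chosen]  # tactic emits actions, actions get reward
        if end_to_end_credit:
            # Well-functioning end-to-end credit assignment: credit flows all
            # the way back to strategy selection/weighting.
            weights[chosen] += lr * reward
        # Otherwise reward only hones tactical cognition (not modelled here),
        # and strategy weights are left to some other selection mechanism.
    return weights


if __name__ == "__main__":
    random.seed(0)
    print("end-to-end credit:  ", run(end_to_end_credit=True))
    print("tactics-only credit:", run(end_to_end_credit=False))
```

Under end-to-end credit, the strategy aiming directly at reward pulls ahead: its slightly higher per-step reward compounds through both the larger weight updates and the growing chance of being the one in control. Under tactics-only credit the weights never move, so nothing in this loop distinguishes the two strategies, which is the human-like regime described above.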
One Vaguely Mechanistic Illustration of a Similar Concept:
A similar way for this to be broken in humans, departing just a bit from Evan’s comment, is if the credit assignment algorithm could attribute tactical choices to strategies, but not equally reliably across all strategies. As a totally made-up, concrete, and stylized illustration: consider one evolutionarily endowed credit-assignment target, “Feel physically great,” and two strategies: wirehead with drugs (WIRE), or be pro-social (SOCIAL). Whenever WIRE has control, it emits some tactic like “alone in my room, take the most fun available drug,” which takes actions that result in X_w physical pleasure over a day. Whenever SOCIAL has control, it emits some tactic like “alone in my room, abstain from dissociative drugs and instead text my favorite friend,” taking actions which result in X_s physical pleasure over a day.
Suppose also that asocial cognitions like “eat this” have poorly wired feedback channels: the signal is often lost, so they trigger credit assignment only some small fraction of the time. Social cognition is much better wired up and triggers credit assignment every time. Credit assignment runs at most once a day, and whenever it is triggered, the reward emitted is 1:1 with the amount of physical pleasure experienced that day.
Since WIRE only gets credit a fraction of the time that it’s due, the average reward credited to WIRE (over 30 days, say) is much less than 30·X_w. WIRE will be reinforced more than SOCIAL if and only if X_w is much greater than X_s (say, if the drug is heroin, or if your friends are insufficiently fulfilling). Otherwise, even if the drug is somewhat more physically pleasurable than the warm fuzzies of talking with friends, SOCIAL will be reinforced more than WIRE.
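A minimal simulation of this stylized setup, again my own sketch rather than anything from the original discussion: X_W, X_S, and the two credit-firing probabilities below are arbitrary made-up numbers, chosen only so that the drug is slightly more pleasurable per day while WIRE’s feedback channel usually drops the signal.

```python
import random

# Toy simulation of the stylized WIRE vs. SOCIAL story above. Every number
# here (pleasure per day, signal-drop probability, number of days) is a
# made-up assumption, just like the prose it illustrates.

X_W = 1.2              # physical pleasure per day when WIRE is in control
X_S = 1.0              # physical pleasure per day when SOCIAL is in control
P_CREDIT_WIRE = 0.2    # asocial cognition: credit assignment fires only sometimes
P_CREDIT_SOCIAL = 1.0  # social cognition: credit assignment fires every time


def credited_reward(pleasure_per_day, p_credit, days=30, trials=10_000):
    """Average reward actually credited to a strategy over `days` days, when its
    feedback channel only triggers credit assignment with probability `p_credit`."""
    total = 0.0
    for _ in range(trials):
        total += sum(pleasure_per_day for _ in range(days) if random.random() < p_credit)
    return total / trials


if __name__ == "__main__":
    random.seed(0)
    wire = credited_reward(X_W, P_CREDIT_WIRE)
    social = credited_reward(X_S, P_CREDIT_SOCIAL)
    print(f"credit to WIRE over 30 days:   {wire:6.1f}  (vs. {30 * X_W:.1f} with no dropped signal)")
    print(f"credit to SOCIAL over 30 days: {social:6.1f}")
```

With these numbers WIRE is slightly more pleasurable per day (1.2 vs. 1.0), yet it is credited only about 7 units of reward over the month against SOCIAL’s 30, so SOCIAL wins. Setting P_CREDIT_WIRE to 1.0 removes the impediment, and then WIRE out-accumulates SOCIAL whenever X_w exceeds X_s by any margin at all, which is the no-signal-dropping case discussed in the conclusion below.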
Conclusion:
I think Evan is saying that he expects advanced reward-based AI systems to have no such impediments by default, even if humans do have something like this in their construction. Such a stylized agent without any signal-dropping would reinforce WIRE over SOCIAL every time that taking the drug was even a tiny bit more physically pleasurable than talking with friends.
Maybe there is an argument that such reward-aimed goals/strategies would not produce the most rewarding actions in many contexts, or for some other reason would not be selected for / found in advanced agents (as Evan suggests in encouraging someone to argue that such goals/strategies require concepts which are unlikely to develop), but the above might be in the rough vicinity of what Evan was thinking.
REMINDER: At the time of writing, this has not been endorsed by Evan.
Thanks for the story! I may comment more on it later.
That seems to imply that humans would continue to wirehead, conditional on their having started wireheading.
Yes, I think they indeed would.