I might have misunderstood your question. Let me restate how I understood it: In the original post you said...
I would optimize myself to maximize my reward, not whatever current behavior triggers the reward.
I intended to give a counterexample: Here is humanity, and we’re optimizing behaviors which once triggered the original rewarded action (replication) rather than the rewarded action itself.
We didn’t end up “short circuiting” into directly fulfilling the reward, as you had described. We care about the “current behavior [that] triggers the reward,” such as not hurting each other and so on—in other words, we did precisely what you said you wouldn’t do.
(Also, sorry, I tried to ninja-edit everything into a much more concise statement, so the parent comment is now different from what you originally saw. The conversation as a whole still makes sense, though.)
We don’t have the ability to directly fulfill the reward center. I think narcotics are the closest thing we’ve got now, and lots of people try to mash that button to the detriment of everything else. I just think it’s a kind of crude button, and it doesn’t work as well as the direct ability to fully understand and control your own brain.
I think you may have misunderstood me—there’s a distinction between what evolution rewards and what humans find rewarding. (This is getting hard to talk about because we’re using “reward” both to describe the process used to steer a self-modifying intelligence in the first place and to describe one of the processes that implements our human intelligence and motivations, and those are two very different things.)
The “rewarded behavior” selected by the original algorithm was directly tied to replication and survival.
Drug-stimulated reward centers fall in the “current behaviors that trigger the reward” category, not the original reward. Even when we self-stimulate our reward centers, the thing that we are stimulating isn’t the thing that evolution directly “rewards”.
Directly fulfilling the originally incentivized behavior isn’t about food and sex—a direct way might, for example, be to insert human genomes into rapidly dividing, tough organisms and create tons and tons of them and send them to every planet they can survive on.
Similarly, an intelligence which arises out of a process set up to incentivize a certain set of behaviors will not necessarily target those incentives directly. It might go on to optimize completely unrelated things that only coincidentally satisfy those incentives. That’s the whole concern.
If an intelligence arises due to a process which creates things that cause us to press a big red “reward” button, the thing that eventually arises won’t necessarily care about the reward button, won’t necessarily care about the effects of the reward button on its processes, and indeed might completely disregard the reward button and all its downstream effects altogether… in the same way we don’t terminally value spreading our genome at all.
Our neurological reward centers are a second layer of sophisticated incentivizing which emerged from the underlying process of incentivizing fitness.
I think I understood you. What do you think I misunderstood?
Maybe we should quit saying that evolution rewards anything at all. Replication isn’t a reward; it’s just a byproduct of a non-intelligent process. There was never an “incentive” to reproduce, any more than there is an “incentive” for any physical process. High-pressure air moves to low-pressure regions, not because there’s an incentive, but because that’s just how physics works. At some point, this non-sentient process accidentally invented a reward system, and replication, which is a byproduct and not a goal, continued to be a byproduct and not a goal. Of course, reward systems that maximized duplication of genes and gene carriers flourished, but today, when we have the ability to directly duplicate genes, we don’t do it, because we were never actually rewarded for that kind of behavior and we generally don’t care too much about duplicating our genes except insofar as it’s tied to actually rewarded stuff like sex, having children, etc.