You wouldn’t consider the cluster of things which typically fall under morality to be terminal values, which you care about irrespective of your internal mental state?
I don’t consider morality to be a terminal value. I would point out that even a value that I have that I can’t give up right now wouldn’t necessarily be terminal if I had the ability to directly modify the components of my mind. Such values are unalterable because I am not able to physically manipulate the hardware, not because I wouldn’t alter them if I could (and saw a reason to).
That implies that you would do anything at all (baby-mulching machines, nuke the world, etc.) for sufficient stimulation of your pleasure center.
Well, the pleasure center and the reward center are different things, but I take your meaning. I think that I could be conditioned to build a baby-mulching machine or a doomsday device. Why not? Other people have done it. Why would I assume that I’m that different from them?
EDIT TO ADD: Even if I have a value that I can’t escape currently (like not killing people), that’s not to say that if I had the ability to physically modify the parts of my brain that held my values I wouldn’t do it for some reason.
My statement is stronger. If in your current state you don’t have any terminal moral values, then in your current state you would voluntarily agree to operate baby-mulching machines in exchange for the right amount of neural stimulation.
Now, I don’t happen to think this is true (because some “moral values” are biologically hardwired into humans), but this is a consequence of your position.
Again, you’ve pulled a statement out of a discussion about the behavior of a self-modifying AI. So, fine. In my current condition I wouldn’t build a baby mulcher. That doesn’t mean that I might not build a baby mulcher if I had the ability to change my values. You might as well say that I terminally value not flying when I flap my arms. The thing you’re discussing just isn’t physically allowed. People terminally value only what they’re doing at any given moment because the laws of physics say that they have no choice.
I think you’re confusing “terminal” and “immutable”. Terminal values can and do change.
And why is that? Do you, perchance, have some terminal moral value which disapproves?
As far as I know, terminal values are things that are valuable in and of themselves. I don’t consider not building baby-mulchers to be valuable in and of itself. There may be some scenario in which building baby-mulchers is more valuable to me than not, and in that scenario I would build one. Likewise with doomsday devices. It’s difficult to predict what that scenario would look like, but given that other humans have built them I assume that I would too. In those circumstances, if I could turn off the parts of my brain that make me squeamish about doing that, I certainly would. I don’t think that not doing horrible things is valuable in and of itself; it’s just a way of avoiding feeling horrible. If I could avoid feeling horrible and found value in doing horrible things, then I would probably do them.
People terminally value only what they’re doing at any given moment because the laws of physics say that they have no choice.
Huh? That makes no sense. How do you define “terminal value”?
In the statement that you were responding to, I was defining it the way you seemed to when you said that “some ‘moral values’ are biologically hardwired into humans.” You were saying that given the current state of their hardware, their inability to do something different makes the value terminal. This is analogous to saying that given the current state of the universe, whatever a person is doing at any given moment is a terminal value because of their inability to do something different.
I don’t think that not doing horrible things is valuable in and of itself; it’s just a way of avoiding feeling horrible.
OK. I appreciate you biting the bullet.
You were saying that given the current state of their hardware, their inability to do something different makes the value terminal.
No, that is NOT what I am saying. “Biologically hardwired” basically means you are born with these values and, while overcoming them is possible, it will take extra effort. It certainly does not mean that you have no choice. Humans do something other than what their biologically hardwired terminal values tell them on a very regular basis. One reason for this is that values are many and they tend not to be consistent with one another.
So how does this relate to the discussion on AI?