We disagree if you intended to make the claim that ‘our goals’ are the bedrock on which we should base the notion of ‘ought’, since we can take the moral skepticism a step further, and ask: what evidence is there that there is any ‘ought’ above ‘maxing out our utility functions’?
A further point of clarification: It doesn’t follow—by definition, as you say—that what is valuable is what we value. Would making paperclips become valuable if we created a paperclip maximiser? What about if paperclip maximisers outnumbered humans? I think benthamite is right: the assumption that ‘what is valuable is what we value’ tends just to be smuggled into arguments without further defense. This is the move that the wirehead rejects.
Note: I took the statement ‘what is valuable is what we value’ to be equivalent to ‘things are valuable because we value them’. The statement has another possible meaning: ‘we value things because they are valuable’. I think both are incorrect for the same reason.
I think I must be misunderstanding you. It’s not so much that I’m saying that our goals are the bedrock, as that there’s no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there’s some basis for what we “ought” to do, but I’m making exactly the same point you are when you say:
what evidence is there that there is any ‘ought’ above ‘maxing out our utility functions’?
I know of no such evidence. We do act in pursuit of goals, and that’s enough for a positivist morality, and it appears to be the closest we can get to a normative morality. You seem to say that it’s not very close at all, and I agree, but I don’t see a path to getting any closer.
So, to recap, we value what we value, and there’s no way I can see to argue that we ought to value something else. Two entities with incompatible goals are to some extent mutually evil, and there is no rational way out of it, because arguments about “ought” presume a given goal both can agree on.
Would making paperclips become valuable if we created a paperclip maximiser?
To the paperclip maximizer, they would certainly be valuable—ultimately so. If you have some other standard, some objective measurement, of value, please show it to me. :)
By the way, you can’t say the wirehead doesn’t care about goals: part of the definition of a wirehead is that he cares most about the goal of stimulating his brain in a pleasurable way. An entity that didn’t care about goals would never do anything at all.
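A minimal sketch of the point that value here is relative to an agent’s utility function; the agents, outcomes, and numbers below are invented purely for illustration. Each agent’s “ought” is just whatever maximises its own function, and there is no further, agent-neutral function left over to appeal to.

```python
# A minimal sketch: each agent is modelled as nothing more than a utility
# function over outcomes. Outcome names and numbers are invented for
# illustration only.

outcomes = ["make_paperclips", "preserve_humans", "wirehead"]

def clippy_utility(outcome):
    # The paperclip maximiser scores outcomes solely by paperclip production.
    return {"make_paperclips": 10.0, "preserve_humans": 0.0, "wirehead": 0.0}[outcome]

def human_utility(outcome):
    # A crude stand-in for a human-ish utility function.
    return {"make_paperclips": 0.0, "preserve_humans": 10.0, "wirehead": 3.0}[outcome]

def best(utility):
    """What an agent 'ought' to do, relative to its own utility function."""
    return max(outcomes, key=utility)

print(best(clippy_utility))  # -> make_paperclips
print(best(human_utility))   # -> preserve_humans

# There is no third function that ranks the utility functions themselves;
# any such ranking would just be one more agent's utility function.
```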
I think that you are right that we don’t disagree on the ‘basis of morality’ issue. My claim is only what you said above: there is no objective bedrock for morality, and there’s no evidence that we ought to do anything other than max out our utility functions. I am sorry for the digression.
An entity that didn’t care about goals would never do anything at all.
I agree with the rest of your comment, and, depending on how you define “goal”, with the quote as well. However, what about entities driven only by heuristics? Those may have developed to pursue a goal, but not necessarily. Would you call an agent that is driven purely by heuristics goal-oriented? (I have in mind simple rules along the lines of “go left when there is a light on the right”; think Braitenberg vehicles minus the evolutionary aspect.)
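A minimal sketch of the purely heuristics-driven agent described in the parenthetical above; the rule and sensor values are invented for illustration. The code maps sensor readings straight to actions, with no goal or utility function represented anywhere, which is exactly what makes the question of whether it is “goal-oriented” interesting.

```python
# A minimal sketch of a purely heuristics-driven agent, in the spirit of a
# Braitenberg vehicle: sensor readings are mapped straight to actions by
# hard-wired rules. No goal or utility function appears anywhere in the code.

def act(light_left, light_right):
    # Hard-wired condition-action rules, e.g. "go left when there is a
    # light on the right".
    if light_right > light_left:
        return "turn_left"
    if light_left > light_right:
        return "turn_right"
    return "go_straight"

# The agent simply reacts, step by step, to whatever it currently senses.
for sensors in [(0.2, 0.9), (0.8, 0.1), (0.5, 0.5)]:
    print(sensors, "->", act(*sensors))
```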
Yes, I thought about that when writing the above, but I figured I’d fall back on the term “entity”. ;) An entity would be something that could have goals (sidestepping the hard work of specifying exactly what objects qualify).
What is valuable is what we value, because if we didn’t value it, we wouldn’t have invented the word “valuable” to describe it.
By analogy, suppose my favourite colour is red, but I speak a language with no term for “red”. So I invent “xylbiz” to refer to red things; in our language, it is pretty much a synonym for “red”. All objects that are xylbiz are my favourite colour. “By definition” to some degree, since my liking red is the origin of the definition “xylbiz = red”. But note that: things are not xylbiz because xylbiz is my favourite colour; they are xylbiz because of their physical characteristics. Nor is xylbiz my favourite colour because things are xylbiz; rather xylbiz is my favourite colour because that’s how my mind is built.
It would, however, be fairly accurate to say that if an object is xylbiz, it is my favourite colour, and it is my favourite colour because it is xylbiz (and because of how my mind is built). It would also be accurate to say that “xylbiz” refers to red things because red is my favourite colour, but this is a statement about words, not about redness or xylbizness.
Note that if my favourite colour changed somehow, so that I now like purple and invent the word “blagg” for it, things that were previously xylbiz would not become blagg; however, you would notice that I stop talking about “xylbiz” (actually, being human, I would probably just redefine “xylbiz” to mean purple rather than define a new word).
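A small sketch of the same point in code; the predicate name is invented for illustration. “Xylbiz” is defined, at coinage time, in terms of a physical property, so a later change in my preferences changes which words I care to use, not which objects satisfy the predicate.

```python
# A small sketch of the xylbiz point; names are invented for illustration.
# The word is coined because red is my favourite colour, but it is *defined*
# in terms of the physical property "red".

favourite_colour = "red"

def is_xylbiz(obj_colour):
    # Fixed at coinage time by the physical property, not by whatever my
    # favourite colour happens to be later.
    return obj_colour == "red"

print(is_xylbiz("red"))     # True
print(is_xylbiz("purple"))  # False

# Suppose my tastes change:
favourite_colour = "purple"

# The same objects are xylbiz as before; what changes is only whether I
# bother to use (or quietly redefine) the word.
print(is_xylbiz("red"))     # still True
```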
By the way, the philosopher would probably ask “what evidence is there that we should value what mental states feel like from the inside?”
Just to be clear, I don’t think you’re disagreeing with me.
See also
Hard to be original anymore. Which is a good sign!