I can conjure a few scenarios. Imagine that you expected to find Valutrons: subatomic particles that impose “value” onto things. The things you value are such because they have more Valutrons, and the things you don’t do not. Or imagine that Omega comes up to you and tells you that there is a “true value” associated with ordinary objects. If you discovered that your values were based in something that was non-subjective, would you treat those oughts any differently?
...I guess I would get valutron-dynamics worked out, and engineer systems to yield maximum valutron output?
Except that I’d only do that if I believed it would be a handy shortcut to a higher output from my internal utility function that I already hold as subjectively “correct”.
I.e., if valutron physics somehow gave a low yield for making delicious food for friends and a high yield for knifing everyone, I would still support good hosting and oppose stabbytimes.
So the answer is, apparently, even if the scenarios you conjure came to pass, I would still treat oughts and is-es as distinctly different types from each other, in the same way I do now.
But I still can’t equate those scenarios with giving any meaning to “values having some metaphysically basic truth”.
Although the valutron-engineering idea seems like a good idea for a fun absurdist sci-fi short story =]
Well, the trick would be that it couldn’t be counter to experience: you would never find yourself actually valuing, say, cooking-for-friends more than knifing-your-friends if knifing-your-friends carried more valutrons. You might expect more from cooking-for-friends and be surprised at the valutron output for knifing-your-friends. In fact, that’d be one way to tell the difference between “valutrons cause value” and “I value valutrons”: in the latter scenario you might be surprised by valutron output, but not by your subjective values. In the former, you would actually be surprised to find that you valued certain things, which correlated with a high valutron output.
But that evidence is pretty much already in. We don’t find ourselves surprised by ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between “subjective” and “objective” ethics. In fact, when humans try to make systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.
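Purely for illustration, the distinguishing test described above can be sketched as a toy simulation. Everything here is made up: the action names, the valutron readings, and the 0.5 “surprise” threshold are all hypothetical, chosen only to show the two observable signatures apart.

```python
ACTIONS = ["cook_for_friends", "knife_friends"]

# What you antecedently expect, both for your own valuation of each
# action and (naively) for its valutron reading.
expected_value = {"cook_for_friends": 1.0, "knife_friends": -1.0}

def measure_valutrons(action):
    # Hypothetical physics: the meter need not agree with expectations.
    return {"cook_for_friends": -0.8, "knife_friends": 0.9}[action]

def felt_value(action, world):
    if world == "I value valutrons":
        # Valuations stay put; only the meter reading can surprise you.
        return expected_value[action]
    else:  # "valutrons cause value"
        # Valuations track the meter, so the valuations themselves
        # can surprise you.
        return measure_valutrons(action)

def surprising_valuations(world, threshold=0.5):
    # Actions where what you find yourself valuing diverges from what
    # you expected to value.
    return [a for a in ACTIONS
            if abs(felt_value(a, world) - expected_value[a]) > threshold]

for world in ("I value valutrons", "valutrons cause value"):
    print(world, "->", surprising_valuations(world))
```

In the first world the list of surprising valuations is always empty, whatever the meter says; in the second, the surprises in your own valuations line up with the surprising valutron readings. That asymmetry is the test.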
Honestly if I ever found my values following valutron outputs in unexpected ways like that, I’d suspect some terrible joker from beyond the matrix was messing with the utility function in my brain and quite possibly with the physics too.
We don’t find ourselves surprised by ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between “subjective” and “objective” ethics. In fact, when humans try to make systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.
Right.
Which describes very well how the type distinction between “objective” and “subjective” feels: intuitively obvious and logically sound. Alternatives aren’t conceivable.
That’s one of the zombie’s weak points, anyway.
It just doesn’t seem like much of a zombie. But that makes sense, as it wasn’t discovered by trying to pin down an honest sense of fear.
My zombie originally was, and I think I can sum it up as the thought that:
Maybe the same principles that identify wirehead-type states as undesirable under our values, would, if completely and consistently applied, identify everything and anything possible as in the class of wirehead-type states.
(The simple “enjoy broccoli” was an analogy for the entire complicated human CEV.
I threw in a reference to “meaningful human relationships” not because that’s my problem any more than the average person’s, but because “other people” seems to have something important to do with distinguishing between a happiness we actually want and an undesirable wirehead-type state.)
How do you kill that zombie? My own solution more-or-less worked, but it was rambling and badly articulated and generally less impressive than I might like.
And yeah, the philosophical problem is at base an existential angst factory. The real problem to solve for me is, obviously, getting around the disappointing life setback I mentioned.
But laying this philosophical problem to rest with a nice logical piledriver that’s epic enough to friggin incinerate it would be one thing in service of that goal.
The subjective/objective opposition in theories of value is somewhat subverted by the existence of subjective facts. There are qualia of decision-making. These include at a minimum any emotional judgments which play a role in forming a preference or a decision, and there may be others.
Whether or not there is a sense of moral rightness, distinct from emotional, logical, and aesthetic judgments, is a basic question. If the answer is yes, that implies a phenomenological moral realism—there is a separate category of moral qualia. The answer is no in various psychologically reductive theories of morality. Hedonism, as a descriptive (not yet prescriptive) theory of human moral psychology, says that all moral judgments are really pleasure/pain judgments. Nietzsche offered a slightly different reduction of everything, to “will to power”, which he regarded as even more fundamental.
How this subjective, phenomenological analysis relates to the computational, algorithmic, decision-theoretic analysis of decision-making is one of the great unaddressed questions in all the discussion on this site about morals and preferences and utility functions. Of course, it’s an aspect of the general ontological problem of consciousness. And it ought to be relevant to the discussion you’re having with Annie… if you can find a way to talk about it.
I also agree that it would make a great absurdist sci-fi story. Reminds me of something Vonnegut would have written.