...I guess I would get valutron-dynamics worked out, and engineer systems to yield maximum valutron output?
Except that I’d only do that if I believed it would be a handy shortcut to a higher output from my internal utility function that I already hold as subjectively “correct”.
i.e., if valutron physics somehow gave a low yield for making delicious food for friends, and a high yield for knifing everyone, I would still support good hosting and oppose stabbytimes.
So the answer is, apparently, even if the scenarios you conjure came to pass, I would still treat oughts and is-es as distinctly different types from each other, in the same way I do now.
But I still can’t equate those scenarios with giving any meaning to “values having some metaphysically basic truth”.
Although the valutron-engineering idea seems like a good idea for a fun absurdist sci-fi short story =]
I also agree that it would make a great absurdist sci-fi story. Reminds me of something Vonnegut would have written.
Well, the trick would be that it couldn’t be counter to experience: you would never find yourself actually valuing, say, cooking-for-friends over knifing-your-friends if knifing-your-friends carried more valutrons. You might expect more from cooking-for-friends and be surprised at the valutron output for knifing-your-friends. In fact, that’d be one way to tell the difference between “valutrons cause value” and “I value valutrons”: in the latter scenario you might be surprised by the valutron output, but not by your subjective values. In the former, you would actually be surprised to find that you valued certain things, which correlated with a high valutron output.
But that’s pretty much the answer right there: we don’t find ourselves surprised by our ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between “subjective” and “objective” ethics. In fact, when humans try to build ethical systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.
Honestly, if I ever found my values following valutron outputs in unexpected ways like that, I’d suspect some terrible joker from beyond the matrix was messing with the utility function in my brain, and quite possibly with the physics too.
“We don’t find ourselves surprised by our ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between ‘subjective’ and ‘objective’ ethics. In fact, when humans try to build ethical systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.”
Right.
Which very well describes the way the type distinction of “objective” and “subjective” feels intuitively obvious, and logically sound. Alternatives ain’t conceivable.
That’s one of the zombie’s weak points, anyway.
It just doesn’t seem like much of a zombie. But that makes sense, as it wasn’t discovered by someone trying to pin down an honest sense of fear.
My zombie originally was, and I think I can sum it up as the thought that:
Maybe the same principles that identify wirehead-type states as undesirable under our values would, if completely and consistently applied, identify everything and anything possible as belonging to the class of wirehead-type states.
(The simple “enjoy broccoli” was an analogy for the entire complicated human CEV.
I threw in a reference to “meaningful human relationships” not because that’s my problem any more than it is the average person’s, but because “other people” seems to have something important to do with distinguishing between a happiness we actually want and an undesirable wirehead-type state.)
How do you kill that zombie? My own solution more or less worked, but it was rambling, badly articulated, and generally less impressive than I might like.
And yeah, the philosophical problem is at base an existential angst factory. The real problem to solve for me is, obviously, getting around the disappointing life setback I mentioned.
But laying this philosophical problem to rest with a nice logical piledriver that’s epic enough to friggin incinerate it would be one thing in service of that goal.