Hm. So the challenge here, then, is to construct an argument with the premise “Nothing really matters” and the conclusion “Existential Angst”, one that would negate the standard objection: “If nothing matters, then I am allowed to have subjective values that things do matter, and I am not provably wrong in doing so.”
This seems like it will take a bit of mental gymnastics: the bottom line of the argument is already filled in, but I will try.
So, somehow, it has to be argued that even if nothing matters, you are not allowed to just posit subjective values.
I suppose the best argument for that might go something like:
You are not a purely rational agent that can divide things so neatly. Your brain is for chasing gazelles and arguing about who gets the most meat, not for high-level theories of value. As such, it doesn’t parse the difference between “subjective” and “objective” value systems in the way you want. When you say “subjective values,” your brain doesn’t really interpret it that way; it treats it in a manner identical to how it would treat an objective value system. What you’re really doing is guarding your brain against existential angst: by putting an artificial label of “subjective” in front of your value system, you give it an unfalsifiable “angst protection” term. It still doesn’t really matter; you are just cleverly fooling yourself because you don’t want to face your angst. That’s fine if all you care about is usefulness, but you claim to care about truth: and the truth is that nothing matters, including your so-called “Meaningful personal relationships,” “Doing good,” or “Being happy.”
Hm. That wasn’t actually as difficult as I thought it would be. Thank you, brain, for being so good at clever arguments.
I seem to have constructed something of a “stronger zombie opponent” here. I’ve also figured out its weak point, but I am curious to see who kills it and how.
Heh, yeah, it’s kind of an odd case in which the fact that you want to write a particular bottom line before you begin is quite possibly an argument for that bottom line?
Quite honestly that zombie doesn’t even seem to be animated to me. My ability to discriminate ‘ises’ and ‘oughts’ as two distinct types feels pretty damn natural and instinctive to me. What bothered me was the question of whether my oughts were internally inconsistent.
Ah. Perhaps I talked around the issue of that zombie, rather than at it directly:
The specific issue I was getting at is that even if your moral “ought” isn’t based in some state of the world (an “is”), you will treat it like it is: you will act like your “oughts” are basic, even when they aren’t. You will treat your oughts as if they matter outside of your own head, because as a human brain you are good at fooling yourself.
To put it another way: would you treat your oughts any different if they DID turn out to be based in some metaphysically basic truth somehow?
If the answer is no, then you are treating your ‘subjective’ values the same as you would ‘objective’ ones. So applying the ‘subjective’ label doesn’t pull any weight: your values don’t really matter, and thus depression and angst are simply the natural path to take once you know how the world works.
(Note: I am not actually arguing something I believe here: I am just letting the zombie get in a few good swings. I don’t actually think it is true, and I already have a couple of lines of attack against it. But I would be a poor rhetorical necromancer if I let my argument-zombies fall apart too easily.)
would you treat your oughts any different if they DID turn out to be based in some metaphysically basic truth somehow?
I… can’t even answer that, because I can’t conceive of a way in which that COULD be true. What would it even MEAN?
Still seems like a harmless corpse to me. I mean, not to knock your frankenskillz, but it seems like sewing butterfly wings onto a dead earthworm and putting it on top of a 9 volt battery. XD
I can conjure a few scenarios. Imagine that you expected to find Valutrons: subatomic particles that impose “value” onto things. The things you value are such because they have more Valutrons, and the things you don’t do not. Or imagine that Omega comes up to you and tells you that there is a “true value” associated with ordinary objects. If you discovered that your values were based in something non-subjective, would you treat those oughts any differently?
...I guess I would get valutron-dynamics worked out, and engineer systems to yield maximum valutron output?
Except that I’d only do that if I believed it would be a handy shortcut to a higher output from my internal utility function that I already hold as subjectively “correct”.
i.e., if valutron physics somehow gave a low yield for making delicious food for friends, and a high yield for knifing everyone, I would still support good hosting and oppose stabbytimes.
So the answer is, apparently, even if the scenarios you conjure came to pass, I would still treat oughts and is-es as distinctly different types from each other, in the same way I do now.
But I still can’t equate those scenarios with giving any meaning to “values having some metaphysically basic truth”.
Although the valutron-engineering idea seems like a good idea for a fun absurdist sci-fi short story =]
Well, the trick would be that it couldn’t be counter to experience: you would never find yourself actually valuing, say, cooking-for-friends as more valuable than knifing-your-friends if knifing-your-friends carried more valutrons. You might expect more from cooking-for-friends and be surprised at the valutron output for knifing-your-friends. In fact, that’d be one way to tell the difference between “valutrons cause value” and “I value valutrons”: in the latter scenario you might be surprised by valutron output, but not by your subjective values. In the former, you would actually be surprised to find that you valued certain things, which correlated with a high valutron output.
But that’s pretty much where we already are. We don’t find ourselves surprised by ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between “subjective” and “objective” ethics. In fact, when humans try to make systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.
Honestly if I ever found my values following valutron outputs in unexpected ways like that, I’d suspect some terrible joker from beyond the matrix was messing with the utility function in my brain and quite possibly with the physics too.
We don’t find ourselves surprised by ethics on the basis of some external condition that we are checking against, so we can conclude that there IS a difference between “subjective” and “objective” ethics. In fact, when humans try to make systems that way, we end up revising them as our subjective values change, which you would not expect if our values were actually coming from an objective source.
Right.
Which describes very well the way the type distinction between “objective” and “subjective” feels intuitively obvious and logically sound. Alternatives ain’t conceivable.
That’s one of the zombie’s weak points, anyway.
It just doesn’t seem like much of a zombie. But that makes sense, as it wasn’t discovered by trying to pin down an honest sense of fear.
My zombie originally was, and I think I can sum it up as the thought that:
Maybe the same principles that identify wirehead-type states as undesirable under our values would, if completely and consistently applied, identify everything and anything possible as falling into the class of wirehead-type states.
(The simple “enjoy broccoli” was an analogy for the entire complicated human CEV.
I threw in a reference to “meaningful human relationships” not because that’s my problem any more than it is for the average person, but because “other people” seems to have something important to do with distinguishing between a happiness we actually want and an undesirable wirehead-type state.)
How do you kill that zombie? My own solution more-or-less worked, but it was rambling and badly articulated and generally less impressive than I might like.
And yeah, the philosophical problem is at base an existential angst factory. The real problem to solve for me is, obviously, getting around the disappointing life setback I mentioned.
But laying this philosophical problem to rest with a nice logical piledriver that’s epic enough to friggin incinerate it would be one thing in service of that goal.
The subjective/objective opposition in theories of value is somewhat subverted by the existence of subjective facts. There are qualia of decision-making. These include at a minimum any emotional judgments which play a role in forming a preference or a decision, and there may be others.
Whether or not there is a sense of moral rightness, distinct from emotional, logical, and aesthetic judgments, is a basic question. If the answer is yes, that implies a phenomenological moral realism—there is a separate category of moral qualia. The answer is no in various psychologically reductive theories of morality. Hedonism, as a descriptive (not yet prescriptive) theory of human moral psychology, says that all moral judgments are really pleasure/pain judgments. Nietzsche offered a slightly different reduction of everything, to “will to power”, which he regarded as even more fundamental.
How this subjective, phenomenological analysis relates to the computational, algorithmic, decision-theoretic analysis of decision-making, is one of the great unaddressed questions, in all the discussion on this site about morals and preferences and utility functions. Of course, it’s an aspect of the general ontological problem of consciousness. And it ought to be relevant to the discussion you’re having with Annie… if you can find a way to talk about it.