This is probably the first “philosophical” thought I’ve had in my life
Haha, good one. Humor is often a good way to open :)
happy
I assume you mean “desirability of mind-state”. People associate the word “happy” with a lot of different things, so I think it’s worth giving some sort of operational definition (could probably be informal though).
So I suspect a certain commonality among human beings in that we all actually share the same terminal values, or terminal virtues.
I think a quick primer on consequentialism vs. virtue ethics would be appropriate. a) Some people might not know the difference. b) It’s a key part of what you’re writing about and so a refresher feels like it’d be useful.
You use the phrase “terminal virtues” without first defining it. I don’t think it’s an “official” term, and I don’t think it “has enough behind it” for people to infer what it means.
I think you should more clearly distinguish between what’s a question for the social sciences, and what’s a question for philosophy.
Social sciences:
1) Do people claim to be consequentialists, or virtue ethicists?
2) Do people act like consequentialists, or virtue ethicists? Ie. what would the decisions they make imply about their beliefs?
3) What are the fundamental things that drive/motivate people? Can it always be traced back to happiness or goodness (as you define them)? Or are there things that drive people independent of happiness and goodness? Example: say that someone claims to value truth. Would they tell the truth if they knew for a fact that it would lead to less happiness and goodness in the long-run?
One of the key points you seem to be making is that as far as 3) goes, for the overwhelming majority of people, their drives/motives can be traced to happiness or goodness. But what does it mean for a drive to be traced to something? Well, my thought is that drives depend on what we truly care about. We may have a drive for X, but if we only care about X to the extent that it leads to Y, then Y is what we truly care about, and I predict that the drive for X will only be as strong as the expectation that X → Y (although I’m sure the relationship isn’t perfectly linear; humans are weird).
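One informal way to write down that prediction (my own notation, not anything from your article): holding fixed how much we care about Y, the strength of the drive for X roughly tracks how strongly we expect X to produce Y,

$$\text{drive}(X) \;\propto\; P(Y \mid X),$$

with the caveat above that the real relationship is surely not a clean proportionality.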
However, this is an empirical question. The way to figure it out would be to study it scientifically, ie. by observing how people act and feel in different situations. And since it involves people, the domain would be one of the social sciences.
Philosophy:
1) Does anything have “intrinsic value”?
2) What does having “intrinsic value” even mean exactly? How would the world look if things had intrinsic value? How would it look if things didn’t have intrinsic value?
3) What about morality? What does it mean for something to be moral/good? How do these rules get determined?
My stance is that a) the words I mention above are hard to use because they don’t have precise and commonly accepted definitions, and b) terminal goals are completely arbitrary. Ie. you can’t say that killing people is a bad terminal goal. You can only say that “killing people is bad if… you want to promote a sane and happy world.” (Instrumental) rationality is about being good at achieving our ends. But it doesn’t help us pick our ends.
I don’t want to believe this though. I’ve been conditioned to feel like ends are good/bad, despite my understanding. And I’ve been conditioned to seek purpose, ie. to find and seek “good” ends. Because of the way I’ve been conditioned, I don’t like believing that goals are completely arbitrary, but unfortunately it’s the view that makes the most sense to me by very large margins.
Often, but not always, these two desires go hand-in-hand.
I don’t think it’s completely clear what this means. I think you mean “doing good tends to also make us happy”. You do end up saying this, but I think you say it two sentences too late. Ie. I’d say “doing good tends to also make us happy” before using the hand-in-hand phrase, and before talking about the “components” of happiness (I’d use the word determinants, which is a bit of a nitpick).
psychological motivators
I have a feeling that this isn’t the right term. Regardless, I’d explain what you mean by it.
handing out money through personal-happiness-optimizing random acts of kindness
Aka warm fuzzies.
As rational human beings, we occasionally will consciously choose to inefficiently optimize our personal happiness for the sake of others.
Very important point: If you’re claiming that doing so is rational, then one of two things must be the case:
1) You alter your claim to say that it’s rational… presuming a terminal value of goodness.
2) You argue that a terminal value of goodness is rational.
As I read, I couldn’t help but think that virtue ethics and consequentialism are not really so different at heart.
Another very important point: distinguish theory from practice.
As I understand it:
In theory, they’re complete opposites. A virtue ethicist would say, “X is just inherently virtuous. It doesn’t matter what the consequences are.” A consequentialist would say that it does depend on the consequences. Someone might say, “But consequentialists have to choose terminal values, don’t they?” My response: “Yes, but they admit that this is an arbitrary decision. They don’t claim that these terminal values are virtuous (as I understand it).”
In practice, virtue ethicists often pursue things to achieve the end of being virtuous, and their virtues are often very, very similar to the terminal values of consequentialists. At the end of the day, their virtues are pretty much just happiness and goodness. And at the end of the day, these are often the terminal values that consequentialists choose. I think that this is the point that you were making. And I thank you for making it, because I didn’t really pay much attention to that fact. My overly literal and reductionist approach failed to lead me to notice how important the practical outcome is. Furthermore, I’m not sure how true this is, but it seems that in practice, a lot of consequentialists believe that their terminal goals do possess inherent virtue, in which case the lines between consequentialism and virtue ethics do get really fuzzy.
Thanks for the tips! Adding a brief primer on virtue ethics and consequentialism is a good idea, and I think you’re right that this whole idea is more relevant to the social sciences than philosophy. Did you actually want answers to those questions, or were they just to help show me the kind of questions that belong in each category? Great distinction at any rate; I’ll go change that word “philosophical” to “intellectual” now.
I think you noticed, or at least, you’ve now led me to notice, that I’m not really interested in the “in theory” at all, or in struggling over definitions. I’m just trying to show what is actually happening “in practice” and suggest that whether someone calls himself a virtue ethicist or a consequentialist doesn’t change the fact that he is psychologically motivated (for lack of a better term) to pursue happiness and goodness. I think what I’m trying to do with this article is help figure out where we should draw a boundary.
b) terminal goals are completely arbitrary. Ie. you can’t say that killing people is a bad terminal goal. You can only say that “killing people is bad if… you want to promote a sane and happy world.” (Instrumental) rationality is about being good at achieving our ends. But it doesn’t help us pick our ends.
I think this might have been my whole point, that our real ends aren’t as arbitrary as we think. It seems to me that in practice there are really just two ends that humans pursue. Nothing else seems like an end-in-itself. Killing people can be an instrumental goal that someone consciously or subconsciously thinks will make him happy, that will lead him to his optimal mind-state. He might be wrong about this; it might not actually lead him to his optimal mind-state. Or maybe it does. Either way, it doesn’t matter in the context of this discussion whether we classify killing as “wrong” or not; what matters is what we do about it. In the real world, we’re motivated, by our own desires for personal happiness and goodness, to lock up killers.
Very important point: If you’re claiming that doing so is rational, then one of two things must be the case:
But I’m not claiming it’s rational… I’m not claiming anything, and I’m not arguing anything or proving any point. I’m just describing how I observe that people who seem very rational can still maximize their personal happiness inefficiently. The resulting idea is that goodness seems like an end-in-itself, and a relatively universal one, so we should recognize it as such.
The main takeaway I’m getting from your advice is that I should try to make it clear in this article that I’m not attempting to prove a point, but rather just to “carve along the joints” and offer a clearer way of looking at things by lumping happiness and goodness into the same category.
Perhaps one other way we could describe what is actually happening in practice would be to say that virtue ethicists pursue their terminal values more subconsciously while consequentialists pursue the same terminal values more consciously.
Did you actually want answers to those questions, or were they just to help show me the kind of questions that belong in each category?
The latter.
I think you noticed, or at least, you’ve now led me to notice, that I’m not really interested in the “in theory” at all, or in struggling over definitions.
I didn’t know you weren’t interested in it at all, but I knew you were more interested in the practice part. Come to think of it, I suspect that you’re exaggerating in saying that you don’t really care about it at all.
I’m just trying to show what is actually happening “in practice” and suggest that whether someone calls himself a virtue ethicist or a consequentialist doesn’t change the fact that he is psychologically motivated (for lack of a better term) to pursue happiness and goodness.
Well said. In your article, I think that some of the language implies otherwise, but I don’t like talking about semantics either and I think the important point is that this is clear now.
The other important point is that I’ve screwed up and need to be better. I have an instinct to interpret things literally. I also try hard to look for what people probably meant given more contextual-type clues, but I’ve partially failed in this instance, and I think that all the information was there for me to succeed.
I think this might have been my whole point, that our real ends aren’t as arbitrary as we think. It seems to me that in practice there are really just two ends that humans pursue.
I think that we agree, but let me just make sure: ends are arbitrary in the sense that you could pick whatever ends you want, and you can’t say that they’re inherently good/bad. But they aren’t arbitrary in the sense that what actually drives us isn’t arbitrary at all. Agree?
But I’m not claiming it’s rational… I’m not claiming anything, and I’m not arguing anything or proving any point. I’m just describing how I observe that people who seem very rational can still maximize their personal happiness inefficiently. The resulting idea is that goodness seems like an end-in-itself, and a relatively universal one, so we should recognize it as such.
Let me try to rephrase this to see if I understood and agree: “People who seem very rational seem to act in ways that don’t maximize their personal happiness. One possibility is that they are trying to optimize for personal happiness but failing. I think it’s more likely that they are optimizing for goodness in addition to happiness. Furthermore, this seems to be true for a lot of people.”
I didn’t know you weren’t interested in it at all, but I knew you were more interested in the practice part. Come to think of it, I suspect that you’re exaggerating in saying that you don’t really care about it at all.
Hah, and I thought I was literal. I guess I’m interested in knowing the “in theory” just so I can make connections (like adherents to different moral systems having different tendencies in terms of making decisions consciously vs. subconsciously) to the “in practice”.
The other important point is that I’ve screwed up and need to be better. I have an instinct to interpret things literally. I also try hard to look for what people probably meant given more contextual-type clues, but I’ve partially failed in this instance, and I think that all the information was there for me to succeed.
But at the same time, you’ve really helped me figure out my point, which wouldn’t have happened if you’d said “nice article, I get what you’re saying here.” In regular life conversations, it’s better to just think about what someone meant and reply to that, but for an article like this, it was totally worthwhile for you to reply to what I actually said and share what you thought it implied.
I think that we agree, but let me just make sure: ends are arbitrary in the sense that you could pick whatever ends you want, [and you can’t say that they’re inherently good/bad.] But they aren’t arbitrary in the sense that what actually drives us isn’t arbitrary at all. Agree?
The bracketed part I don’t care about. Discussing “inherently good/bad” seems like a philosophical debate that hinges on our ideas of “inherent.” The rest, I agree :) We seem to choose which actions to take arbitrarily, and through those actions we seemingly arbitrarily position ourselves somewhere on the happiness-goodness continuum.
Let me try to rephrase this to see if I understood and agree: “People who seem very rational seem to act in ways that don’t maximize their personal happiness. One possibility is that they are trying to optimize for personal happiness but failing. I think it’s more likely that they are optimizing for goodness in addition to happiness. Furthermore, this seems to be true for a lot of people.”
Great wording! May I plagiarize?