Can someone please react to my gut reaction about virtue ethics? I’d love some feedback if I misunderstand something.
It seems to me that most virtues are just instrumental values that make life convenient for people, especially those with unclear or intimidating terminal values.
The author says this about protagonist Tris:
Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’
I think maybe the deeper ‘become good’ node (and its huge overlap with the one other node that’s equally deep-seated: ‘become happy’) is actually the “deeper motivation” at the core of virtue ethics.
Then two things account for the individual variance in which virtues are pursued:
(1) Individuals have different amounts of overlap between their ‘become good’ and ‘become happy’ nodes.
(2) Different people find happiness in different things.
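To make this node-and-overlap picture concrete, here’s a toy sketch in Python. It’s entirely my own construction with made-up numbers, nothing from the original post: each virtue is an instrumental node feeding the two deep terminal nodes, and different weightings on those terminal nodes rank the same virtues differently.

```python
# Toy model of a motivation graph: each virtue is an instrumental node whose
# payoff flows into the two deep terminal nodes, 'become good' and 'become
# happy'. All numbers are invented for illustration.

# virtue -> (contribution to 'become good', contribution to 'become happy')
VIRTUES = {
    "bravery": (0.7, 0.4),
    "loyalty": (0.5, 0.6),
    "environmentalism": (0.8, 0.3),
}

def virtue_value(virtue: str, good_weight: float, happy_weight: float) -> float:
    """Score a virtue for a person whose terminal nodes carry these weights."""
    good, happy = VIRTUES[virtue]
    return good_weight * good + happy_weight * happy

# Variance source (1): different good/happy weightings rank the same virtues
# differently. (Variance source (2) would show up as a different VIRTUES
# table per person, since people find happiness in different things.)
people = {"mostly-altruistic": (0.8, 0.2), "mostly-selfish": (0.02, 0.98)}
for name, (gw, hw) in people.items():
    ranking = sorted(VIRTUES, key=lambda v: virtue_value(v, gw, hw), reverse=True)
    print(name, "->", ranking)
```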
One virtue the author wants to realize in herself is loyalty:
I’ve thought a lot about the virtue of loyalty. In the past, loyalty has kept me with jobs and friends that, from an objective perspective, might not seem like the optimal things to spend my time on. But the costs of quitting and finding a new job, or cutting off friendships, wouldn’t just have been about direct consequences in the world, like needing to spend a bunch of time handing out resumes or having an unpleasant conversation. There would also be a shift within myself, a weakening in the drive towards loyalty. It wasn’t that I thought everyone ought to be extremely loyal–it’s a virtue with obvious downsides and failure modes. But it was a virtue that I wanted, partly because it seemed undervalued.
I think loyalty is generally a useful way to achieve individual happiness (self-centered) and overall goodness (others-centered), but not always; I’m guessing that in certain situations, the author would abandon the loyalty virtue to pursue the underlying preferences for happiness and goodness.
So maybe:
Cultivating a certain virtue in yourself is an example of an instrumental value
Innate preferences = some combination of personal happiness + goodness
Terminal values = arbitrary goals (often unidentified) somehow based on these preferences
At first glance, I like virtue ethics a lot. I’d like to pursue some virtues of my own, but ones that are carefully selected based on my terminal values, if I can summon the necessary introspective powers to figure them out. Until then, I’ll say vaguely that my terminal value is preference fulfillment, and just choose some virtues that I think would efficiently fulfill my preferences. So some of my instrumental values will be virtues, which I can pursue both instinctively and through consequentialist-style opportunity-cost analyses.
Example:
My innate preferences: maybe 98% happiness-driven (selfishness) + 2% goodness-driven (altruism).
(Note: On the surface I might look more altruistic because there’s a LOT of overlap between decisions that are good for others and decisions that make me feel good. Or, you could see the giant overlap and assume I’m 100% selfish. A toy calculation below makes this concrete.)
My lazy terminal value: satisfy my preferences for happiness and goodness
My chosen virtue (aka instrumental goal): become someone who cares about the environment
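Here’s the toy calculation promised in the note above, again with invented numbers: if most happiness-driven decisions also happen to benefit others, a 98%-selfish agent produces mostly other-benefiting behavior, which an outside observer can’t distinguish from genuine altruism (or from pure selfishness).

```python
# Why a 98%-selfish agent can look altruistic from the outside.
# All figures are invented for illustration.

p_happy_driven = 0.98  # share of my decisions driven by 'become happy'
p_good_driven = 0.02   # share driven by 'become good'
overlap = 0.80         # fraction of happiness-driven decisions that also help others

# Assume good-driven decisions always help others; then the fraction of all
# my decisions that visibly benefit others is:
visibly_altruistic = p_happy_driven * overlap + p_good_driven * 1.0
print(f"{visibly_altruistic:.0%} of decisions benefit others")  # 80%

# That same 80% is consistent with a mostly-altruistic agent or with a
# 100%-selfish one that has high overlap; surface behavior underdetermines
# the underlying motive split.
```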
If caring about the environment is my instrumental goal, I can instinctively pick up trash, conserve energy, and use a reusable water bottle; i.e., do the things environmentally conscious people do.
I can also perform opportunity-cost analyses to best realize my chosen virtue and, through it, my terminal value. For example, I could stop showering. Or, I could apparently have the same effect by eating six fewer hamburgers in a year. Personally, I prefer showering to eating hamburgers, tasty as they are, so I’ve cut back significantly on my meat consumption but continue to take showers without worrying about their length.
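For what it’s worth, here’s the back-of-the-envelope arithmetic behind that trade-off. The water figures are ballpark numbers I’ve seen cited, and they vary a lot by source, so treat them as assumptions rather than data:

```python
# Back-of-the-envelope water comparison behind the shower-vs-hamburger choice.
# Both constants are rough estimates that vary widely by source.

GALLONS_PER_SHOWER = 17    # ~8 minutes at ~2.1 gallons per minute
GALLONS_PER_BURGER = 660   # oft-cited water footprint of one quarter-pound beef patty

shower_savings = GALLONS_PER_SHOWER * 365  # give up daily showers for a year
burger_savings = GALLONS_PER_BURGER * 6    # skip six hamburgers in a year

print(f"No showers for a year: ~{shower_savings:,} gallons saved")  # ~6,200
print(f"Six fewer hamburgers:  ~{burger_savings:,} gallons saved")  # ~4,000
# Same order of magnitude, so the choice can come down to which sacrifice
# you mind less.
```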
Result: My innate preferences for happiness and goodness are harmoniously satisfied.
Is this allowed? Is there room for consequential reasoning in virtue ethics? Can virtue ethics be useful for consequentialists? Can I please be both a consequentialist and a virtue ethicist?
I feel like the most likely objection to this idea will be that true altruism does not exist as an innate preference. I have some tentative thoughts here too, if anyone is curious. Innate altruism seems like the easiest explanation for why rational people don’t always do the things that they feel will bring them the greatest personal happiness.