Midnight_Analyst
On Virtue
Strongly disagree. If you think of Virtue-with-a-capital-V as a Hellenic deity or a Platonic essence, then sure, Virtue doesn’t exist. But if “virtue” is like “honesty”, i.e. a name for a particular pattern of action that is observable and can be somewhat objectively evaluated, then virtue absolutely exists, even if we might disagree about how to identify it.
I could rephrase my claim here by saying that utility is what actually matters, and that there isn’t a separate thing called virtue that matters in and of itself. I agree that virtue can still be a useful concept for generating utility. But it seems wrong to me to say ‘virtue can be a useful concept for generating utility, and therefore virtue has intrinsic value’. To me, it’s just a useful concept.
Now, about the actual meat of your argument: you seem to have inverted the classical way of thinking of the virtues, which is that a person is virtuous precisely insofar as their virtues are habituated. You suggest that we award Virtue Points for initially acquiring some skill which is both difficult and praiseworthy, but stop awarding them thereafter. To the classical authors, this is backwards: the state in which you effortlessly always do the right thing is virtue, while the struggles to get there are merely the process of its acquisition.
(And yeah, this implies that some people are naturally more virtuous than others, just like some people are taller or smarter than others. C’est la vie.)
I don’t think I agree with virtue ethics, but I definitely agree that the virtue that I’m talking about and the virtue that virtue ethicists talk about are different things.
I don’t think the models are totally incompatible: your Virtue Points are just the delta between different levels of virtue.
Interesting. I guess since I’m not a virtue ethicist, I think it’s probably not a good idea to define my virtue with respect to that kind of virtue.
Nonetheless, I think it’s more useful to model virtue as a state and not as a series of events, and to think of yourself and others as virtuous based on their expressed habits, not on how much they’re struggling to acquire them.
I think that utility comes from people’s expressed habits and not from how much they’re struggling to acquire them, but my claim in the post is that, internally, we should be praising expressed habits less, relative to the effort of acquiring them, than we currently do. It’s probable that we also shouldn’t be externally praising expressed habits as much as we currently do, although that isn’t a claim I explicitly make in the post, and it’s one I’d need to think about more in order to be confident in.
From the perspective of the collective, the point of awarding Virtue Points is so that people know what traits to signal to remain in good graces with the community. From the perspective of the individual, a lot of the time that will feel like doing the Right Thing and not getting rewarded, due to phenomena discussed here.
I think with my post I’m pointing to something quite specific—a collection of ideas I expect to be somewhat useful in some not-particularly-well-thought-through way, by making sure that, to the extent that people think ‘person X deserves recompense’, they think so in a way that is fair. Basically, I think I’m trying to make sure people don’t get Utility Points and Virtue Points muddled up. I’m not going into whether people should mentally assign others Virtue Points, but I’m saying that most people will mentally assign others Virtue Points whatever anyone says, and that it’d probably be good for those people to be fairer in the way they do so.
I want to distinguish this mental action from the behaviours that result from it. I’m trying not to make claims directly about who and what should be outwardly praised.
On the connection to involuntary suffering, I have written the following in response to another comment:
I said ‘something akin to Virtue Points’, because I agree that someone getting hit is not actually more virtuous than someone not getting hit. I can understand why you would be very surprised if I thought that.
I think perhaps the whole post could be rewritten and framed in terms of suffering (or pain, or something of that nature), because I think that’s essentially what I’m getting at, and I feel it might be what Scott is getting at as well. I think it’s a highly common intuition that suffering is bad, and people often think that those who suffer deserve some kind of compensation, regardless of whether it was voluntary or not.
For example, say I have the following options:
A) Give a meal to a starving child.
B) Give something equally valuable to a healthy, non-starving child (note that obviously ‘something equally valuable’ doesn’t mean ‘a meal’ in this case, because a meal is a lot less valuable to a non-starving child than to a starving child. It’d probably have to be something more expensive than a meal.)
I’ve tried to define this such that, from a utilitarian perspective, there’s no difference between choosing option A and choosing option B.
I’d still rather choose A, because even though I know the Utility Points from both A and B are equal, there’s something about balancing out past suffering that makes me feel nice and fuzzy inside, and gives me a sense of justice. I expect this sense of justice is very common.
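The reason ‘something equally valuable’ has to cost more than a meal is diminishing marginal utility: the same resources buy less utility for someone who already has plenty. As a toy illustration (the logarithmic utility function, the wealth numbers, and the variable names here are all my own assumptions, not anything from the post), we can solve for the gift size that equalises the two utility gains:

```python
import math

# Toy model of diminishing marginal utility: u(wealth) = log(1 + wealth).
# (Log utility is an assumption for illustration only.)
def utility(wealth):
    return math.log(1 + wealth)

meal_value = 1.0       # resources the meal represents
starving_wealth = 0.0  # the starving child starts with nothing
healthy_wealth = 9.0   # the healthy child is comparatively well off

# Utility gained by the starving child from the meal (option A).
gain_a = utility(starving_wealth + meal_value) - utility(starving_wealth)

# Gift size that gives the healthy child the same utility gain (option B).
# Solving log(1 + W + g) - log(1 + W) = gain_a for g gives
# g = (1 + W) * (e^gain_a - 1).
equal_gift = (1 + healthy_wealth) * (math.exp(gain_a) - 1)

print(round(equal_gift, 2))  # 10.0 -- ten times the meal's cost of 1.0
```

Under these made-up numbers, matching the starving child’s utility gain requires spending ten times as much on the non-starving child, which is the sense in which the two options can be stipulated as utility-equal while differing enormously in cost.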
I should say that I think my post generally should not change the behaviour of people who hold strongly utilitarian views. But I think that even those who would consider themselves staunch utilitarians still possess to some degree these evolved intuitions about virtue and suffering, and to the extent that they do, I feel like it’d be nice (and probably valuable) for them (and everyone else) to be assigning their mental Virtue Points in ways that make more sense and are fairer.
Thanks! :)
Yeah, so on the question of the effects of mentally assigning Virtue Points: I think the extent to which the ideas of my post should change behaviour, and whether that change would be good, is unclear. I wrote the post under the assumption that it’d be better for us to have this fairer understanding of how the amount of suffering involved in a task can be drastically different for different people depending on their existing abilities. I feel like it’s important for society to realise this, and I feel like we’ve only partially realised it at the moment. But possibly this isn’t the case, and I need to think about it more. I’m open to the idea that the way people currently assign Virtue Points actually shouldn’t be meddled with (which is why my post is more of a ‘starting point for discussion’ than a ‘thing I am completely sure about’). I think you’re right to see the effects (rather than the mental action itself) as the thing that is actually important at the end of the day.
On involuntary suffering, having thought about this a bit more: I suppose the phrase ‘something akin to Virtue Points’ does imply that I think ‘Virtue Points’ would be an okay-ish name for the kind of thing I’m pointing to in the case of involuntary suffering, which isn’t actually what I think. I agree that Virtue Points is not a good name for that. What I was trying to point out in the post is that, as a very general statement, I feel like sufferers deserve compensation whether or not the suffering was voluntary.