Is utilitarianism foundational to LessWrong? I ask because for a while I’ve been toying with the idea of writing a few posts with morality as a theme, from the standpoint of, broadly, virtue ethics, with some pragmatic and descriptive ethics thrown in. (The themes are quite generous and interlocking, and to be honest I don’t know where to start or whether I’ll finish.) This perspective treats stable character traits, with their associated emotions, drives, and motives, as the most reliable determinant of moral behaviour, and aims to encourage people to “build character” so as to become more moral beings or improve their behaviour. It doesn’t concern itself with quantitative approaches to welfare. Frankly, I find it hard to take the numerical applications of utilitarianism seriously, and my brain just shuts down when faced with some of the ethical problems usually enjoyed around here (torture vs. dust specks, the repugnant conclusion, contrived deals with strange gods, and so on).
I know that Eliezer’s virtues-of-rationality post is appreciated by many people around here, but it’s a declaration of (commitment to) values more than anything, and it never seemed to be the dominant paradigm. I guess I just want to know whether a virtue-ethical approach would be well received here, and the extent to which a utilitarian and a virtue ethicist can usefully discuss morality without jumping up a meta level into theories of normative ethics.
If it helps you, the 2014 census gave these results for moral beliefs:
Moral views:
- Accept/lean towards consequentialism: 901 (60.0%)
- Accept/lean towards deontology: 50 (3.3%)
- Accept/lean towards natural law: 48 (3.2%)
- Accept/lean towards virtue ethics: 150 (10.0%)
- Accept/lean towards contractualism: 79 (5.3%)
- Other/no answer: 239 (15.9%)

Meta-ethics:
- Constructivism: 474 (31.5%)
- Error theory: 60 (4.0%)
- Non-cognitivism: 129 (8.6%)
- Subjectivism: 324 (21.6%)
- Substantive realism: 209 (13.9%)
In general I don’t think there are foundational ideas on LW that shouldn’t be questioned. Any idea is up for investigation provided the case is well argued.
But there are certain ideas that will be downvoted and dismissed because people feel they aren’t useful to talk about, such as whether God exists. I think the OP was asking whether this is a topic that falls into that category.
The problem with “does God exist” isn’t that LW is atheist. It’s that it’s hard to say interesting things about the subject and provide a well-argued case.
I don’t expect to learn something new when I read another post about whether or not God exists. If someone knows the subject well enough to tell me something new, then there’s no problem with them writing a post to communicate that insight.
I endorse discussion of virtue ethics on LW, mostly because I haven’t seen many arguments for why I should use it, or discussions of how using it works. I’ve seen a lot of pro-utilitarianism and “how to do things with utilitarianism” pieces, and plenty of discussion of deontology in the form of credible precommitments, heuristics, and rule utilitarianism, but I haven’t seen a virtue ethics piece that remotely approaches Yvain’s Consequentialism FAQ in terms of readability and usability.
When you say virtue ethics, it sounds like you are describing consequentialism implemented on human software.
If we’re talking about the philosopher’s virtue ethics, this question should clarify: Are virtues virtuous because they lead to moral behavior? Or is behavior moral because it cultivates virtue?
The first is just applied consequentialism. The second is the philosopher’s virtue ethics.
The thing is… that’s really beyond the scope of what I care to argue about. I understand the difference, but it’s so small as to not be worth the typing time. It’s precisely the kind of splitting hairs I don’t want to go into.
The theme that would get treated is morality, not ethics. It starts off assuming that it is self-evident why good is good, and that human beings do not hold wildly divergent morals or have wildly different internal states in the same situation. Mostly. Sample topics I’m likely to touch on: rationality as wisdom; the self-perception of a humble person and how that may be an improvement on the baseline; the intent with which one enters an interaction; a call to be more understanding of others; respect and disrespect; how to deflect (and why to avoid making) arguments in bad faith; malicious dispositions; and more. Lots of things relevant to community maintenance.
These essays aren’t yet written, so perhaps that’s why it all sounds (and is) so chaotic. There may be more topics that conflict with utilitarianism outright, especially where large numbers of individuals are concerned. Conflicts with consequentialism are less likely, but still possible.
If you don’t want to talk about the difference then I respect that, and I wasn’t suggesting that you do. If anything, I would suggest avoiding the term “virtue ethics” entirely and instead talking about virtue, which is more general and a component of most moral systems.
I disagree that it is splitting hairs, though, or a small difference. It makes a large difference whether you wish to cultivate virtue for its own sake (regardless of, and independent of, consequences) or because it helps you achieve other goals. The latter makes fewer assumptions about the goals of your reader.
Consequentialism, where morality is viewed through the lens of what happens as a result of human actions, is a major part of LessWrong. Utilitarianism specifically, where you judge an act by aggregating the welfare of everyone affected, is a subset of consequentialism and not nearly as widely accepted. Virtue ethics is generally well liked, and it’s often said around here that “consequentialism is what’s right; virtue ethics is what works.” I think a practical guide to virtue ethics would be well received.
No. Individual utility calculations are foundational, as a component of decision theory, but decision-theoretic utility and interpersonal-comparison utility are different things with different assumptions.
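The distinction can be made concrete with a toy sketch (the names and numbers here are invented for illustration): a decision-theoretic utility function is only defined up to a positive affine rescaling, so rescaling changes nothing about an individual’s choices, but it can flip the ranking produced by an interpersonal sum.

```python
# Sketch: decision-theoretic utilities are defined only up to a positive
# affine transformation, so an individual's choice is invariant under
# rescaling, while a "sum of utilities" across people is not.

def best_option(utils):
    """Return the option with the highest utility."""
    return max(utils, key=utils.get)

alice = {"A": 1.0, "B": 2.0}
bob   = {"A": 3.0, "B": 1.0}

# Rescale Alice's utilities with a positive affine map: u -> 100*u + 5.
alice_rescaled = {k: 100 * v + 5 for k, v in alice.items()}

# Alice's own choice is unchanged by the rescaling...
assert best_option(alice) == best_option(alice_rescaled) == "B"

def util_sum(u1, u2):
    """Interpersonal sum of two utility functions over the same options."""
    return {k: u1[k] + u2[k] for k in u1}

# ...but the interpersonal-sum ranking is not: originally A wins (4 > 3),
# after rescaling B wins (206 > 108).
print(best_option(util_sum(alice, bob)))           # A
print(best_option(util_sum(alice_rescaled, bob)))  # B
```

This is the usual formal reason the two notions of “utility” come apart: summing across people requires extra assumptions that decision theory alone does not supply.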
Encouraging people to “build character” is a solid view, and one of the main ones I take. But I’d observe that listing out goals and developing training regimens have different purposes and uses.
I think virtue ethics is sufficiently edgy, new, different these days to be interesting. Go on.
I agree, scholarship is a problem.
Okay, it’s ancient enough, but it fell into disuse around the Enlightenment and was hardly considered 100–120 years ago. It returned among academic philosophers like Philippa Foot, and Catholics like MacIntyre tried to keep it alive, but it is only roughly now that it is slowly being considered again by the hip young atheist literati, for whom karma is merely a metaphor and who do not literally believe in bad deeds putting a stain on the soul. In that sense it is only newly fashionable again.
Again I recommend a poll:
Is utilitarianism foundational to LessWrong? (use the middle option to see results only) [pollid:964]
Also, have you read this post? The virtue tag only points at it and one other, but searching will likely find more.
Given that I know somebody is a virtue ethicist, I place a prior probability of 20% that they are bisexual, and a prior probability of 40% that they are some variant of highly functional sociopath.
That’s adjusted for overconfidence. I -want- to assign 60% to bisexuality and 80% to sociopath.
Your beliefs imply likelihood ratios of ~10 and ~70 for bisexuality and sociopathy respectively (assuming base rates of 2-3% and 1%, respectively). What do you think you know and how do you think you know it?
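For concreteness, the odds arithmetic behind those implied ratios can be sketched in a few lines (using the ~2.5% and ~1% base rates assumed in the comment above):

```python
def implied_likelihood_ratio(posterior, base_rate):
    """Likelihood ratio needed to move a base rate to a stated posterior,
    computed as the ratio of posterior odds to prior odds."""
    posterior_odds = posterior / (1 - posterior)
    prior_odds = base_rate / (1 - base_rate)
    return posterior_odds / prior_odds

# 20% bisexuality given virtue ethicist, against a ~2.5% base rate:
print(round(implied_likelihood_ratio(0.20, 0.025)))  # 10
# 40% sociopathy given virtue ethicist, against a ~1% base rate:
print(round(implied_likelihood_ratio(0.40, 0.01)))   # 66
```

A likelihood ratio near 70 means treating “is a virtue ethicist” as roughly seventy times more probable among sociopaths than among non-sociopaths, which is the extraordinary claim being questioned.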
The two variables aren’t distinct; bisexuality in this case is a “symptom” of a particular kind of sociopathy. (The reason the odds aren’t much closer, however, is that “adaptive sociopathy” has been buried under garbage on the internet since Hannibal made sociopathy “cool”, and I’m unable to relocate -any- sources, definitive or otherwise, on the subject since my last research. I may have to resort to textbooks.)
Adaptive sociopaths would find virtue ethics trivial to implement, since adaptive sociopathy is characterized, in effect, by extremely skilled emulation of others. It’s a brand of ethics particularly well suited to them.