Agreed, but what I’m mostly griping about is when people who know that utility functions are a really inaccurate model still go ahead and use the model anyway, even when prefaced by some number of standard caveats. “Goal system”, for example, conveys a similar abstract idea without all of the questionable and misleading technical baggage (let alone the associations with “utilitarianism”), and is more amenable to case-specific caveats. I don’t think we should downvote people for talking about utility functions, especially if they’re newcomers, but there’s a point at which we have to adopt generally higher standards for which concepts we give low Kolmogorov complexity in our language.
I have a vested interest in this. All of the most interesting meta-ethics and related decision theory I’ve seen thus far has come from people associated with SingInst or Less Wrong. If we are to continue to be a gathering place for that kind of mind we can’t let our standards degenerate, and ideally we should be aiming for improvement. From far away it would be all too easy to dismiss Less Wrong as full of naive nerds completely ignorant of both philosophy and psychology. From up close it would be easy to dismiss Less Wrong as overly confident in a suspiciously homogeneous set of philosophically questionable meta-ethical beliefs, e.g. some form of utilitarianism. The effects of such appearances are hard to calculate, and I think they are larger than most might intuit. (The extent to which well-meaning folk of an ideology heavily influenced by Kurzweil have poisoned the well for epistemically hygienic or technical discussion of technological singularity scenarios, for instance, seems both very large and very saddening.)
From up close it would be easy to dismiss Less Wrong as overly confident in a suspiciously homogeneous set of philosophically questionable meta-ethical beliefs, e.g. utilitarianism.
What is giving this appearance? We have plenty of vocal commenters who are against utilitarianism, top-level posts pointing out problems in utilitarianism, and very few people actually defending utilitarianism. I really don’t get it. (BTW, utilitarianism is usually considered normative ethics, not metaethics.)
Also, utility function != utilitarianism. The fact that some people get confused about this is not a particularly good (additional) reason to stop talking about utility functions.
Here is someone just in this thread who apparently confuses EU-maxing with utilitarianism and apparently thinks that Less Wrong generally advocates utilitarianism. I’ll ask XiXiDu what gave him these impressions; that might tell us something.
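The distinction above is worth making concrete. Here is an illustrative sketch (all names and numbers are hypothetical, chosen only to make the point): a utility function is just a representation of an agent’s preferences, while utilitarianism is one particular ethical claim about whose welfare that function should aggregate. Two agents can both be expected-utility maximizers while only one of them is remotely utilitarian.

```python
def selfish_utility(outcome):
    # An expected-utility maximizer need not be utilitarian:
    # this agent cares only about its own payoff.
    return outcome["my_payoff"]

def utilitarian_utility(outcome):
    # A utilitarian-style utility function aggregates everyone's welfare.
    return sum(outcome["payoffs"].values())

def expected_utility(utility, lottery):
    # lottery: list of (probability, outcome) pairs.
    return sum(p * utility(o) for p, o in lottery)

lottery = [
    (0.5, {"my_payoff": 10, "payoffs": {"me": 10, "you": 0}}),
    (0.5, {"my_payoff": 0, "payoffs": {"me": 0, "you": 100}}),
]

# Both agents are EU-maximizers, but they rank this lottery very
# differently, which is exactly why "uses a utility function"
# does not imply "is a utilitarian".
print(expected_utility(selfish_utility, lottery))      # 5.0
print(expected_utility(utilitarian_utility, lottery))  # 55.0
```

The point of the sketch is only that EU-maximization is a claim about the *structure* of preferences, while utilitarianism is a claim about their *content*.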
ETA: The following comment is outdated. I had a gchat conversation with Wei Dai in which he kindly pointed out some ways in which my intended message could easily and justifiably have been interpreted as a much stronger claim. I’ll add a note to my top level comment warning about this.
Also, utility function != utilitarianism. The fact that some people get confused about this is not a particularly good (additional) reason to stop talking about utility functions.
I never proposed that people stop talking about utility functions, and twice now I’ve described the phenomenon that I’m actually complaining about. Are you trying to address some deeper point you think is implicit in my argument, are you predicting how other people will interpret my argument and arguing against that interpreted version, or what? I may be wrong, but I think it is vitally important for epistemic hygiene that we at least listen to and ideally respond to what others are actually saying. You’re an excellent thinker and seemingly less prone to social biases than most, so I am confused by your responses. Am I being dense somehow?
(ETA: The following hypothesis is obviously absurd. Blame it on rationalization. It’s very rare I get to catch myself so explicitly in the act! w00t!) Anyway, the people I have in mind don’t get confused about the difference between reasoning about/with utility functions and being utilitarian; they just take the former as strong evidence of the latter. This doesn’t happen when “utility function” is used technically or in a sand-boxed way, only when it is used in the specific way that I was objecting to. Notice how I said we should be careful about which concepts we use, not which words.
I don’t really get it either. It seems that standard Less Wrong moral philosophy can be seen at some level of abstraction as a divergence from utilitarianism, e.g. because of apparently widespread consequentialism and the focus on decision theory. But yeah, you’d think the many disavowals of utilitarianism would have done more to dispel the notion. Does your impression agree with mine, though, that many people seem to think Less Wrong is largely utilitarian?
(BTW, utilitarianism is usually considered normative ethics, not metaethics.)
I desperately want a word that covers the space I want to cover without pattern-matching to an incorrect or fuzzy thing. (E.g. I think it is important to remember that one’s standard moral beliefs can have an interesting implicit structure at the ethical/metaethical levels, vice versa, et cetera.) Sometimes I use “shouldness” or “morality”, but those are either misleading or awkward depending on context. Are there obvious alternatives I’m missing? I used “moral philosophy” above but I’m pretty sure that’s also straight-up incorrect. “Epistemology of morality” is clunky and probably means something else.