The salient point for LW is the orthogonality thesis, not (alternatives to) moral realism. It's not really a philosophical point, as it's clearly possible in principle to build AIs that pursue arbitrary objectives (and don't care about their moral status). A question closer to practical relevance is the character of the goals of the likely first AGIs, both their initial goals and what they settle on eventually.
I agree with the orthogonality thesis, so there's no disagreement there. I'm not describing the most widely held LessWrong beliefs, just a few that I strongly disagree with.
One issue with the post is that you didn't convincingly point to what specifically you disagree with, as something meaningfully present on LW rather than only independently described or gestured at in your post. You are making claims about what LW views are, but the claims are neither clear nor self-evident enough (in actually referring to something that's really from LW) to stand on their own without references/quotes to clarify what's going on. (It's an unnecessary issue: you could just describe your points without framing them as a disagreement. Though to have a chance of meaningful engagement, an LW post should be shorter and focused on fewer points.)
So I pointed to a real LW view that seems closest to what you are talking about, even though it's clearly irrelevant to your post and isn't what you discuss. I think the LW views relevant to your post (in particular those held by multiple people as common knowledge, openly communicated here) don't say anything too surprising or specific, and are additionally confused about the proper use of philosophical terms.
I didn't want the post to be too long. I agree that not everyone on LessWrong agrees with this, and exactly how prevalent these views are is an empirical matter that I have not investigated. However, my sense, having spent a lot of time around such people, is that they're pretty common.
If it turns out that LessWrong is not anti-realist, the post could have been half the length.
The most popular metaethics on LessWrong appears to be utilitarianism... but it's unclear whether or not utilitarianism is a form of realism.
I think the crux is more about naturalism. Full-strength moral realism, such as Platonism, is often explicitly anti-naturalist.
Utilitarianism is a normative ethical view, not a meta-ethical view. I’m a utilitarian and a realist. One can be a utilitarian and adopt any meta-ethical view.
Of course not. It's a form of consequentialism, so it's metaethics. But it's incomplete metaethics... it doesn't specify realism versus anti-realism, but it does specify other things.
Can you elaborate? Why does being a form of consequentialism make it a metaethical position?
Consequentialism, deontology, etc. are broad claims about ethics, not object-level ethical claims like "thou shalt not kill".
That’s true, but consequentialism, deontology, etc. are typically categorized as normative ethical theories, while claims like “don’t kill” are treated as first-order normative moral claims.
The term “metaethics” is typically used to refer to abstract issues about the nature of morality, e.g., whether there are moral facts. It is pretty much standard in contemporary moral philosophy to refer to consequentialism as a normative moral theory, not a metaethical one.
I don’t think there are correct or incorrect definitions, but describing consequentialism as a metaethical view is at least unconventional from the standpoint of how these terms are used in contemporary moral philosophy.
As omnizoid points out, utilitarianism is not a metaethical position. It is not a form of realism.
Eh. Constructivism, definitely. You should go over to the EA Forum if you want to find all the utilitarians :P