I’m assuming a lot of background in this post that you don’t seem to have. Have you read the sequences, specifically the metaethics stuff?
Moral philosophy on LW is decades (at the usual philosophical pace) ahead of what you would learn elsewhere and a lot of the stuff you mentioned is considered solved or obsolete.
Moral philosophy on LW is decades (at the usual philosophical pace) ahead of what you would learn elsewhere
Really? That’s kind of scary if true. Moral philosophy on LW doesn’t strike me as especially well developed (particularly compared to other rationality related subjects LW covers).
Moral philosophy is not well developed on LW, but I think it’s further than it is elsewhere, and when I look at the pace of developments in philosophy, it looks like it will take decades for everyone else to catch up. Maybe I’m underestimating the quality of mainstream philosophy, though.
All I know is that people who are interested in moral philosophy who haven’t been exposed to LW are a lot more confused than those on LW. And that those on LW are more confused than they think they are (hence the OP).
Moral philosophy is not well developed on LW, but I think it’s further than it is elsewhere
What do you think represents the best moral philosophy that LW has to offer?
And that those on LW are more confused than they think they are (hence the OP).
Just a few months ago you seemed to be saying that we didn’t need to study moral philosophy, but just try to maximize “awesomeness”, which “You already know that you know how to compute”. I find it confusing that this post doesn’t mention that one at all. Have you changed your mind since then, if so why? Or are you clarifying your position, or something else?
What do you think represents the best moral philosophy that LW has to offer?
The metaethics sequence sinks most of the standard confusions, though it doesn’t offer actual conclusions or procedures.
Complexity of value, value being human-specific, morality as an optimization target, and so on.
Maybe it’s just the epistemic quality around here, though. LWers talking about morality are able to go much further without getting derailed than the best I’ve seen elsewhere, even if there isn’t much good work on moral philosophy on LW itself.
Just a few months ago you seemed to be saying that we didn’t need to study moral philosophy, but just try to maximize “awesomeness”, which “You already know that you know how to compute”. I find it confusing that this post doesn’t mention that one at all. Have you changed your mind since then, if so why? Or are you clarifying your position, or something else?
Right. This is a good question.
For actually making decisions, use Awesomeness or something as your moral proxy, because it more or less just works. For those of us who want to go deeper and understand the theory of morality declaratively, the OP applies; we basically don’t have any good theory. They are two sides of the same coin: the situation in moral philosophy is like the situation in physics a few hundred (progress-subjective) years ago, and we need to recognize this before we try to build the house on sand, so to speak. So we are better off just using our current buggy procedural morality.
I could have made the connection clearer, I suppose.
This post is actually a sort of precursor to some new and useful (I hope) work on the subject that I’ve written up but haven’t gotten around to polishing and posting. I have maybe five posts’ worth of morality-related stuff in the works, and then I’m getting out of this godforsaken dungeon.
For actually making decisions, use Awesomeness or something as your moral proxy, because it more or less just works.
Given that we don’t have a good explicit theory of what morality really is, how do you know (and how could you confidently claim in that earlier post) that Awesomeness is a good moral proxy?
So we are better off just using our current buggy procedural morality.
I think I understand what you’re saying now, thanks for the clarification. However, my current buggy procedural morality is not “maximize awesomeness” but more like an instinctive version of Bostrom and Ord’s moral parliament.
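For concreteness, here is a minimal sketch of the no-bargaining limit of a moral parliament, with made-up theories, credences, and scores (Bostrom and Ord’s actual proposal has the delegates negotiate and trade votes, so a flat credence-weighted tally like this is only the simplest degenerate case):

```python
from collections import defaultdict

# Toy "moral parliament": each moral theory gets voting weight equal to
# our credence in it. All names and numbers here are made up for illustration.
credences = {"utilitarian": 0.6, "deontologist": 0.3, "virtue_ethicist": 0.1}

# Each theory scores each available action in [0, 1] (its delegates'
# preferences); higher means more choiceworthy by that theory's lights.
scores = {
    "utilitarian":     {"lie": 0.9, "tell_truth": 0.4},
    "deontologist":    {"lie": 0.0, "tell_truth": 1.0},
    "virtue_ethicist": {"lie": 0.2, "tell_truth": 0.8},
}

def parliament_choice(credences, scores):
    """Sum each theory's scores, weighted by credence, and pick the winner."""
    totals = defaultdict(float)
    for theory, weight in credences.items():
        for action, score in scores[theory].items():
            totals[action] += weight * score
    return max(totals, key=totals.get)

print(parliament_choice(credences, scores))  # tell_truth (0.62 vs. 0.56 for lie)
```

The bargaining in the real proposal matters: it lets a minority theory win on the issues it cares most about, which a flat weighted sum like this cannot capture.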
You don’t tend to find much detailed academic discussion regarding metaethical philosophy on the blogosphere at all.
Disclaimers: strictly comparing it to other subjects which I consider similar from an outside view, and supported only by personal experience and observation.
Have you read the sequences, specifically the metaethics stuff?
I have, and I found it unclear and inconclusive. A number of people have offered to explain it, and they all ended up bowing out, unable to do so.
Moral philosophy on LW is decades (at the usual philosophical pace) ahead of what you would learn elsewhere and a lot of the stuff you mentioned is considered solved or obsolete.
Sorry, I have only read selections of the sequences, and not many of the posts on metaethics. Though as far as I’ve gotten, I’m not convinced that the sequences really solve, or make obsolete, many of the deeper problems of moral philosophy.
The original post, and this one, seem to be running into the “is-ought” gap and moral relativism. Being unable to separate terminal values from biases is due to there being no truly objective terminal values. Despite Eliezer’s objections, this is a fundamental problem for determining what terminal values or utility function we should use, a task you and I are both interested in undertaking.
I think this community vastly overestimates its grip on metaethical concepts like moral realism and moral anti-realism (e.g., the hopelessly confused discussion in this thread). I don’t think the metaethics sequence resolves these sorts of basic issues.
I’m still coming to terms with the philosophical definitions of the different positions and their implications, and the Stanford Encyclopedia of Philosophy seems like a more rounded account of the different viewpoints than the metaethics sequence. I think I might be better off first spending my time continuing to read the SEP and trying to make my own decisions, and then reading the metaethics sequence with that understanding of the philosophical background.
By the way, I can see your point that objections to moral anti-realism in this community may be somewhat motivated by the possibility that friendly AI becomes unprovable. As I understand it, any action can be “rational” if the value/utility function is arbitrary.
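To spell that out (a standard observation; take $A$ to be a hypothetical set of available actions): for any action $a^*$ whatsoever, there is a utility function that makes choosing it expected-utility-maximizing, e.g.

$$U(a) = \begin{cases} 1 & \text{if } a = a^*, \\ 0 & \text{otherwise,} \end{cases} \qquad \text{so} \qquad a^* \in \arg\max_{a \in A} \mathbb{E}[U(a)],$$

so unless something constrains $U$, “rationality” alone rules nothing out.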
There is a lot of diversity of opinion among philosophers, and while that may be true of the discipline as a whole, there is some good stuff to be found there. I’d recommend staying here for the most part rather than wading through philosophy elsewhere, though.
Also, many moral philosophers may have very different moral sentiments from you, and maybe that makes them seem like bigger idiots than they actually are. The differences in sentiment are over whether to accept consequentialism at all, not just over disagreements within consequentialism, among other things.
Moral philosophy on LW is decades (at the usual philosophical pace) ahead of what you would learn elsewhere and a lot of the stuff you mentioned is considered solved or obsolete.
I don’t believe anyone’s really taken the metaethics sequence out for a test drive to see if it solves any nontrivial problems in moral philosophy.
It’s worse than that: no one even knows what the theory laid out there is. EY says different things in different places.
If I recall correctly, it struck me as an OK introduction to metaethics, but it stopped before it got to the hard (i.e., interesting) stuff.
Given that we don’t have a good explicit theory of what morality really is, how do you know (and how could you confidently claim in that earlier post) that Awesomeness is a good moral proxy?
It seems to fit with intuition. How exactly my intuitions are supposed to imply actual morality is an open question.
The metaethics sequence sinks most of the standard confusions, though it doesn’t offer actual conclusions or procedures.
Could you nominate some confusions that are unsunk amongst professional philosophers (vis-à-vis your “decades ahead” claim)?
A number of people have offered to explain it, and they all ended up bowing out, unable to do so.
I find no evidence for that claim.