Moral philosophy on LW is decades (at the usual philosophical pace) ahead of what you would learn elsewhere
Really? That’s kind of scary if true. Moral philosophy on LW doesn’t strike me as especially well developed (particularly compared to other rationality related subjects LW covers).
Moral philosophy is not well developed on LW, but I think it’s further than it is elsewhere, and when I look at the pace of developments in philosophy, it looks like it will take decades for everyone else to catch up. Maybe I’m underestimating the quality of mainstream philosophy, though.
All I know is that people interested in moral philosophy who haven’t been exposed to LW are a lot more confused than those on LW. And that those on LW are more confused than they think they are (hence the OP).
Moral philosophy is not well developed on LW, but I think it’s further than it is elsewhere
What do you think represents the best moral philosophy that LW has to offer?
And that those on LW are more confused than they think they are (hence the OP).
Just a few months ago you seemed to be saying that we didn’t need to study moral philosophy, but just try to maximize “awesomeness”, which “You already know that you know how to compute”. I find it confusing that this post doesn’t mention that one at all. Have you changed your mind since then, if so why? Or are you clarifying your position, or something else?
What do you think represents the best moral philosophy that LW has to offer?
The metaethics sequence sinks most of the standard confusions, though it doesn’t offer actual conclusions or procedures.
Complexity of value. Value being human-specific. Morality as an optimization target. Etc.
Maybe it’s just the epistemic quality around here, though. LWers talking about morality are able to go much further without getting derailed than the best I’ve seen elsewhere, even if there isn’t much good work on moral philosophy on LW itself.
Just a few months ago you seemed to be saying that we didn’t need to study moral philosophy, but just try to maximize “awesomeness”, which “You already know that you know how to compute”. I find it confusing that this post doesn’t mention that one at all. Have you changed your mind since then, if so why? Or are you clarifying your position, or something else?
Right. This is a good question.
For actually making decisions, use Awesomeness or something as your moral proxy, because it more or less just works. For those of us who want to go deeper and understand the theory of morality declaratively, the OP applies; we basically don’t have any good theory. They are two sides of the same coin: the situation in moral philosophy is like the situation in physics a few hundred (progress-subjective) years ago, and we need to recognize this rather than try to build the house on sand, so to speak. So we are better off just using our current buggy procedural morality.
I could have made the connection clearer, I suppose.
This post is actually a sort of precursor to some new and (I hope) useful work on the subject that I’ve written up but haven’t gotten around to polishing and posting. I have maybe five posts’ worth of morality-related stuff in the works, and then I’m getting out of this godforsaken dungeon.
For actually making decisions, use Awesomeness or something as your moral proxy, because it more or less just works.
Given that we don’t have a good explicit theory of what morality really is, how do you know (and how could you confidently claim in that earlier post) that Awesomeness is a good moral proxy?
So we are better off just using our current buggy procedural morality.
I think I understand what you’re saying now, thanks for the clarification. However, my current buggy procedural morality is not “maximize awesomeness” but more like an instinctive version of Bostrom and Ord’s moral parliament.
You don’t tend to find much detailed academic discussion of metaethics in the blogosphere at all.
Disclaimers: strictly comparing it to other subjects which I consider similar from an outside view, and supported only by personal experience and observation.
Really? That’s kind of scary if true. Moral philosophy on LW doesn’t strike me as especially well developed (particularly compared to other rationality related subjects LW covers).
I don’t believe anyone’s really taken the metaethics sequence out for a test drive to see if it solves any nontrivial problems in moral philosophy.
It’s worse than that. No one even knows what theory is actually laid out; EY says different things in different places.
If I recall correctly, it struck me as an OK introduction to metaethics, but it stopped before it got to the hard (i.e., interesting) stuff.
Given that we don’t have a good explicit theory of what morality really is, how do you know (and how could you confidently claim in that earlier post) that Awesomeness is a good moral proxy?
It seems to fit with intuition. How exactly my intuitions are supposed to imply actual morality is an open question.
Could you nominate some confusions that remain unsunk among professional philosophers (vis-à-vis your “decades ahead” claim)?