What do you think represents the best moral philosophy that LW has to offer?
The metaethics sequence sinks most of the standard confusions, though it doesn’t offer actual conclusions or procedures.
Complexity of value. Value being human-specific. Morality as an optimization target. Etc.
Maybe it’s just the epistemic quality around here, though. LWers talking about morality get much further without getting derailed than the best discussions I’ve seen elsewhere, even if there isn’t much good work on moral philosophy on LW itself.
Just a few months ago you seemed to be saying that we didn’t need to study moral philosophy, but should just try to maximize “awesomeness”, which “You already know that you know how to compute”. I find it confusing that this post doesn’t mention that one at all. Have you changed your mind since then, and if so, why? Or are you clarifying your position, or something else?
Right. This is a good question.
For actually making decisions, use Awesomeness or something as your moral proxy, because it more or less just works. For those of us who want to go deeper and understand the theory of morality declaratively, the OP applies: we basically don’t have any good theory. These are two sides of the same coin; the situation in moral philosophy is like the situation in physics a few hundred (progress-subjective) years ago, and we need to recognize this rather than try to build the house on sand, so to speak. So we are better off just using our current buggy procedural morality.
I could have made the connection clearer, I suppose.
This post is actually a sort of precursor to some new and useful (I hope) work on the subject that I’ve written up but haven’t gotten around to polishing and posting. I have maybe 5 posts’ worth of morality-related stuff in the works, and then I’m getting out of this godforsaken dungeon.
For actually making decisions, use Awesomeness or something as your moral proxy, because it more or less just works.
Given that we don’t have a good explicit theory of what morality really is, how do you know (and how could you confidently claim in that earlier post) that Awesomeness is a good moral proxy?
So we are better off just using our current buggy procedural morality.
I think I understand what you’re saying now, thanks for the clarification. However, my current buggy procedural morality is not “maximize awesomeness” but more like an instinctive version of Bostrom and Ord’s moral parliament.
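In case it helps make that concrete: here’s a toy sketch of what a moral parliament might look like in code, under the simplifying assumption that each delegate just casts a credence-weighted vote for its top option (Bostrom and Ord’s actual proposal has the delegates bargain, not merely vote). All the theories, credences, and scores below are invented for illustration.

```python
# Toy "moral parliament": each moral theory gets voting weight equal to
# our credence in it, and votes for the option it scores highest.
# (Bostrom and Ord's real proposal has the delegates bargain, not just vote.)
from collections import defaultdict

# Made-up credences in three moral theories (sum to 1).
credences = {"utilitarian": 0.4, "deontologist": 0.35, "virtue_ethicist": 0.25}

# Made-up choiceworthiness scores, each on the theory's own scale.
scores = {
    "utilitarian":     {"donate": 10, "volunteer": 6, "do_nothing": 0},
    "deontologist":    {"donate": 5,  "volunteer": 9, "do_nothing": 2},
    "virtue_ethicist": {"donate": 6,  "volunteer": 8, "do_nothing": 1},
}

def parliament_vote(credences, scores):
    """Return the option that wins the credence-weighted vote."""
    votes = defaultdict(float)
    for theory, credence in credences.items():
        favorite = max(scores[theory], key=scores[theory].get)
        votes[favorite] += credence
    return max(votes, key=votes.get)

print(parliament_vote(credences, scores))  # volunteer (0.6 vs. 0.4 for donate)
```

Note that a pure vote like this ignores how strongly each theory prefers its favorite, which is exactly the kind of thing the bargaining step in the real proposal is meant to handle.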
It seems to fit with intuition. How exactly my intuitions are supposed to imply actual morality is an open question.
Could you nominate some confusions that are still unsunk amongst professional philosophers (vis-à-vis your “decades ahead” claim)?