There’s a difference between a metaethics and an ethical theory.
The metaethics sequence is supposed to help dissolve the false dichotomy “either there’s a metaphysical, human-independent Source Of Morality, or else the nihilists/moral relativists are right”. It’s not immediately supposed to solve “So, should we push a fat man off the bridge to stop a runaway trolley before it runs over five people?”
For the second question, we’d want to add an Ethics Sequence (in my opinion, Yvain’s Consequentialism FAQ lays some good groundwork for one).
Try actually applying it to some real-life situations and you’ll quickly discover its problems.
Such as?
Well, for starters, determining whether something is a preference or a bias is rather arbitrary in practice.
I struggled with that myself, but then figured out a rather nice quantitative solution.
Eliezer’s stuff doesn’t say much about that topic, but that doesn’t mean it fails at it.
I don’t think your solution actually resolves things since you still need to figure out what weights to assign to each of your biases/values.
You mean that it’s not something that I could use to write an explicit utility function? Of course.
Beyond that, whatever weights my various concerns carry are handled by built-in algorithms. I just have to do the right thing.
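To make the sticking point concrete, here is a minimal sketch of what an “explicit utility function” with hand-picked weights might look like. The concern names and weight values are invented purely for illustration (they are not anyone’s actual proposal); the point is that nothing in the framework tells you what those numbers should be, which is exactly the step the “built-in algorithms” handle implicitly.

```python
# Toy illustration only: an explicit utility function over a few named concerns.
# The concerns and weights below are made up; picking them is the hard part.

CONCERNS = ["honesty", "comfort", "others_welfare"]

# Hypothetical, hand-picked weights -- the arbitrary choice under discussion.
WEIGHTS = {"honesty": 0.5, "comfort": 0.2, "others_welfare": 0.3}

def utility(outcome_scores: dict) -> float:
    """Weighted sum of per-concern scores (each in [0, 1]) for one outcome."""
    return sum(WEIGHTS[c] * outcome_scores.get(c, 0.0) for c in CONCERNS)

# Example: two candidate actions scored on each concern.
tell_truth = {"honesty": 1.0, "comfort": 0.3, "others_welfare": 0.6}
white_lie  = {"honesty": 0.1, "comfort": 0.9, "others_welfare": 0.7}

print(utility(tell_truth))  # 0.5*1.0 + 0.2*0.3 + 0.3*0.6 = 0.74
print(utility(white_lie))   # 0.5*0.1 + 0.2*0.9 + 0.3*0.7 = 0.44
```

With different (equally defensible-looking) weights, the ranking of the two actions can flip, which is why “just write down the utility function” doesn’t settle the question.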