The metaethics sequence is a bit of a mess, but the point it makes is important, and it doesn’t seem to be just some weird opinion of Eliezer’s.
After I read it I was like, “Oh, OK. Morality is easy. Just do the right thing, where ‘right’ is some incredibly complex set of preferences that are represented only implicitly in physical human brains. And it’s OK that it’s not supernatural or ‘objective’, and we don’t have to ‘justify’ it to an ideal philosophy student of perfect emptiness.” The fake-utility-functions and recursive-justification posts helped.
Maybe there’s something wrong with Eliezer’s metaethics, but I haven’t seen anyone point it out, and I have no reason to suspect it. Most of the material that contradicts it consists of obvious mistakes from not having read and understood the sequences, not enlightened counter-analysis.
Hm. I think I’ll put on my project list “reread the metaethics sequence and create an intelligent reply.” If that happens, it’ll be at least two months out.
I look forward to that.
Has it ever been demonstrated that there is a consensus on what point he was trying to make, and that he in fact established it?
He seems to reach a conclusion, but I don’t believe he demonstrated it, and I never got the sense that he carried the day in the peanut gallery.
Try actually applying it to some real-life situations and you’ll quickly discover the problems with it.
There’s a difference between a metaethics and an ethical theory.
The metaethics sequence is supposed to help dissolve the false dichotomy “either there’s a metaphysical, human-independent Source Of Morality, or else the nihilists/moral relativists are right”. It’s not immediately supposed to solve “So, should we push a fat man off the bridge to stop a runaway trolley before it runs over five people?”
For the second question, we’d want to add an Ethics Sequence (in my opinion, Yvain’s Consequentialism FAQ lays some good groundwork for one).
Such as?
Well, for starters, determining whether something is a preference or a bias is rather arbitrary in practice.
I struggled with that myself, but then figured out a rather nice quantitative solution.
Eliezer’s stuff doesn’t say much about that topic, but that doesn’t mean it fails at it.
I don’t think your solution actually resolves things, since you still need to figure out what weights to assign to each of your biases/values.
You mean that it’s not something that I could use to write an explicit utility function? Of course.
Beyond that, whatever weight all my various concerns have is handled by built-in algorithms. I just have to do the right thing.
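To make the contrast concrete, here is a minimal sketch of what an “explicit utility function” over weighted values might look like if one could be written down. The value names and weights are invented placeholders, not anything proposed in this thread; the point above is precisely that real human preferences don’t decompose this cleanly.

```python
# Toy illustration only: a hypothetical "explicit utility function"
# over named values with hand-picked weights. The value names and
# weights below are invented placeholders for illustration.

WEIGHTS = {
    "lives_saved": 10.0,  # hypothetical weight on saving lives
    "honesty": 3.0,       # hypothetical weight on honesty
    "fairness": 2.5,      # hypothetical weight on fairness
}

def utility(outcome: dict) -> float:
    """Weighted sum of how well an outcome scores on each value."""
    return sum(weight * outcome.get(value, 0.0)
               for value, weight in WEIGHTS.items())

# Usage: compare two hypothetical outcomes on the toy scale.
push = {"lives_saved": 4.0, "fairness": -1.0}
dont = {"lives_saved": -1.0}
print(utility(push))  # 37.5
print(utility(dont))  # -10.0
```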
The main problem I have is that it is grossly incomplete. There are a few foundational posts but it cuts off without covering what I would like to be covered.
What would you like covered? Or is it just that vague “this isn’t enough” feeling?
I can’t fully remember; it’s been a while since I considered the topic, so I mostly have the cached conclusion. More on preference aggregation is one thing. A ‘preferences are subjectively objective’ post. A post that explains more completely what he means by ‘should’ (he has discussed and argued about this in comments).
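“Preference aggregation” here presumably means combining many preference orderings into one. As a concrete illustration, here is a minimal sketch of one standard rule from social choice theory, the Borda count; it is only an example of what the topic covers, not anything the sequence itself commits to.

```python
# Minimal sketch of one standard preference-aggregation rule, the
# Borda count, offered only to make "preference aggregation" concrete.
from collections import defaultdict

def borda(ballots: list) -> dict:
    """Each ballot ranks options best-to-worst; an option earns one
    point per option ranked below it on each ballot."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for position, option in enumerate(ballot):
            scores[option] += n - 1 - position
    return dict(scores)

# Usage: three voters ranking three options.
ballots = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(borda(ballots))  # {'A': 5, 'B': 3, 'C': 1}
```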
It’s much worse than that. Nobody on LW seems to be able to understand it at all.
Nah. Subjectivism. Euthyphro.