I might be mistaken, but I got the feeling that there’s not much of a division. The picture I’ve got of LW on meta-ethics is something along the lines of: values exist in people’s heads, those are real, but if there were no people there wouldn’t be any values. Values are to some extent universal, since most people care about similar things; this makes some values behave as if they were objective. If you want to categorize it (though I don’t know what you would get out of it), it’s a form of nihilism.
An appropriate question when discussing objective and subjective morality is: what would an objective morality look like, as opposed to a subjective one?
People here seem to share anti-realist sensibilities but then balk at the label and do things that are weird for anti-realists, like treating moral judgments as beliefs, making is-ought mistakes, arguing against non-consequentialism as if there were a fact of the matter, and expecting morality to be describable in terms of a coherent and consistent set of rules instead of an ugly mess of evolved heuristics.
I’m not saying it can never be reasonable for an anti-realist to do any of those things, but it certainly seems like belief in subjective or non-cognitive morality hasn’t filtered all the way through people’s beliefs.
I attribute this behavior in part to the desire to preserve the possibility of universal provably Friendly AI. I don’t think a moral anti-realist is likely to think an AGI can be friendly to me and to Aristotle. It might not even be possible to be friendly to me and any other person.
Well that seems like the most dangerous instance of motivated cognition ever.
It seems like an issue that’s important to get right. Is there a test we could run to see whether it’s true?
Yes, but only once. ;)
Did you mean to link to this comment?
Thanks, fixed.
When I explain my meta-ethical standpoint to people in general, I usually avoid phrases such as “there is no objective morality” or “nihilism” because they carry a lot of emotional baggage; oftentimes people go “ah, so you think everything is permitted”, which is not really what I’m trying to convey.
In a lot of cases you are absolutely correct, but there are times when I think people on LW try to answer “what do I think is right?”. This becomes a question of self-knowledge, e.g. to what degree am I aware of what motivates me, or can I formulate what I value?
Terms like “moral subjectivism” are often associated with ‘naive undergraduate moral relativism’ and I suspect a lot of people are trying to avoid affiliating with the latter.
So you don’t think everything is permitted?
How do you convey that no moral statement has an objective truth value, and then convey that something is forbidden?
Sure, I can. Doing something that is forbidden results in harsh consequences (that other agents impose); that is the only meaningful definition I can come up with. Can you come up with any other useful definition?
I like to stick with other people’s definitions and not come up with my own. Merriam-Webster for example:
Thanks for being my straight man! :)
Frankly speaking, I got a bit annoyed while reading your response the first time. So I decided to answer it later, when I wouldn’t just scream “blue!”
I might have misinterpreted your meaning, but it seems like you present a straw man of my argument. I was trying to make concepts like forbidden and permitted pay rent, even in a world where there is no objective morality, as well as show that our (at least my) intuition about “forbiddenness” and “permittedness” is derived from the kind of consequences that they result in. It’s not like something can be not permitted in a group yet have no bad consequences if performed.
The largest rent I can ever imagine getting from terms which are in wide and common use is to use them to mean the same things everybody else means when using them. To me, it seems coming up with private definitions for public words decreases the value of these words.
There are many words used to make moral statements. When you declare that no moral statement can be objectively true, then I don’t think it makes sense to redefine all these words so they now get used in some other way. I doubt you will ever convince me to agree to the redefining of words away from their standard definitions because to me that is just a recipe for confusion.
I have no idea what is “straw man” about any of my responses here.
A few examples could help me understand what you mean, because right now I don’t have a clue.
I guess the goal is to simplify the mess as much as possible, but not more: to find the smallest set of rules that would generate a similar result.
Well said.
I agree. I can’t figure out clearly enough exactly what Eliezer’s metaethics is, but there definitely seem to be latent anti-realist sympathies floating around.
Agreed.
I just posted a more detailed description of these beliefs (which are mine) here.
If anyone here believes in an objectively existing morality I am interested in dialogue. Right now it seems like a “not even wrong”, muddled idea to me, but I could be wrong or thinking of a strawman.