Great post! I agree with your analysis of moral semantics.
However, the question of moral ontology remains... do objective moral values exist? Is there anything I (or anyone) should do, independent of what I desire? With such a clear explanation of moral semantics at hand, I think the answer is an obvious and resounding no. Why would we even think that this is the case? One conclusion we can draw from this post is that telling an unfriendly AI that what it’s doing is “wrong” won’t affect its behavior, because that which is “wrong” might be exactly that which is “moreclippy”! I feel that Eliezer probably agrees with me here, since I gained a lot of insight into the issue from reading Three Worlds Collide.
Asking why we value that which is “right” is a scientific question, with a scientific answer. Our values are what they are now, though, so, semantics aside, doesn’t morality just reduce to decision theory?
Thanks for bringing up that point! You mentioned below your appreciation for desirism, which says inter alia that there are no intrinsic values independent of what agents desire. Nevertheless, I think there is another way of looking at it under desirism that is almost like saying that there are intrinsic values.
Pose the question this way: If I could choose my desires in whole or in part, what set of desires would I be most satisfied with? In general, an agent will be more satisfied with a larger number of satisfiable desires and a smaller number of unsatisfiable desires. Then the usual criteria of desirism apply as a filter.
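To make that selection criterion concrete, here is a toy sketch of my own (the candidate desire sets and the satisfiability judgments are entirely hypothetical, not taken from desirism or from the post): score each candidate set by how many of its desires are satisfiable versus unsatisfiable, pick the best-scoring set, and only then apply desirism’s usual filter.

```python
# Toy illustration only: hypothetical desires with hypothetical
# satisfiability judgments, scored by (satisfiable - unsatisfiable).
candidate_desire_sets = {
    "status-heavy": {
        "high social status": False,        # costly and rarely satisfied
        "perfect health at 60": False,      # not realistically achievable
        "good food": True,
    },
    "curiosity-heavy": {
        "enjoy a broad variety of music": True,
        "enjoy crime novels": True,
        "realistically achievable health": True,
        "perfect health at 60": False,
    },
}

def satisfaction_score(desires):
    """More satisfiable desires and fewer unsatisfiable ones score higher."""
    return sum(1 if satisfiable else -1 for satisfiable in desires.values())

best = max(candidate_desire_sets,
           key=lambda name: satisfaction_score(candidate_desire_sets[name]))
print(best)  # -> "curiosity-heavy"; desirism's criteria would then filter this set further
```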
To the very limited extent that I can modify my desires, I take that near-tautology to mean that, independently of what I currently desire, I should change my mind and come to enjoy and desire things I never used to, such as professional sports, crime novels, and fashion, to take some popular examples. It would also mean that I should enjoy and desire a broad variety of music and food, and generally be highly curious. And it would mean I should reduce my desires for social status, perfect health as I age, and the resolution of difficult philosophical problems.
Considering the extent to which those two can help with other objectives, I’d say you should be very careful about giving up on them.
I disagree. The downsides greatly outweigh the upsides from my perspective.
I’m skeptical that the behaviors people engage in to eke out a little more social status among people they don’t value are anything more than resources wasted with high opportunity cost.
And, at 30 years of age, I’m already starting to notice that recovery from minor injuries and illnesses takes longer than it used to—if I kept expecting and desiring perfect health, I’d get only disappointment from here on out. As much as I can choose it, I’ll choose to desire only a standard of health that is realistically achievable.
I haven’t read through it yet, so I may be completely incorrect, but according to my understanding of Coherent Extrapolated Volition, moral progress as defined there is equivalent to (or at least fairly similar to) the world becoming “better” as defined by desirism (desires which promote the fulfillment of other desires become promoted).
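As a side note, that parenthetical dynamic can be given a toy numerical illustration. Everything below (the three desires, the fulfillment weights, the update rule) is a hypothetical sketch of my own, not the actual machinery of CEV or of desirism; it just shows desires being promoted in proportion to how much they tend to fulfill other desires.

```python
# Toy model only: hypothetical desires, strengths, and fulfillment numbers.
# Rule: a desire is promoted in proportion to how much its pursuit tends to
# fulfill the other desires, weighted by those desires' current strengths.
import numpy as np

strength = np.array([1.0, 1.0, 1.0])   # initial strengths of three desires

# fulfil[i, j]: how much pursuing desire i fulfills (or thwarts) desire j
fulfil = np.array([
    [ 0.0,  0.5,  0.4],   # desire 0 tends to fulfill the others
    [ 0.3,  0.0, -0.2],   # desire 1 is mixed
    [-0.4, -0.3,  0.0],   # desire 2 tends to thwart the others
])

for _ in range(20):
    promotion = fulfil @ strength                  # net fulfillment each desire provides
    strength = np.clip(strength + 0.1 * promotion, 0.0, None)

print(strength)  # mutually fulfilling desires grow; thwarting ones shrink toward zero
```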
You just jumped to the conclusion that there is no epistemically objective morality (nothing you objectively-should do) because there is no metaphysically objective morality, no Form of the Good. That is a fallacy (although a common one on LW). EY has in fact explained how morality can be epistemically objective: it can be based on logic.
I didn’t say that. Of course there is something you should do, given a set of goals... hence decision theory.
There is something you self-centredly should do, but that doesn’t mean there is nothing you morally-should do as well.
According to Eliezer’s definition of “should” in this post, I “should” do things which lead to “life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience...” But unless I already cared about those things, I don’t see why I would do what I “should” do, so as a universal prescription for action, this definition of “morality” fails.
Correct. Agents who don’t care about morality generally can’t be convinced to do what they morally should do.
He also said:
“And I mention this in hopes that I can show that it is not moral anti-realism to say that moral statements take their truth-value from logical entities.” If you do care about reason, you can therefore be reasoned into morality.
In any case, it is no argument against moral objectivism/realism that some people don’t “get” it. Maths sets up universal truths, which can be recognised by those capable of recognising them. That some don’t recognise them doesn’t stop them being objective.
You do not reason with evil. You condemn it.
I subscribe to desirism. So I’m not a strict anti-realist.
You can spend your energy on condemnation if you wish. It doesn’t sound like the most efficient use of my time. It is highly unlikely that political activism (which is what condemnation is about, either implicitly or explicitly) against any particular evil is the optimal way for me to do ‘good’.
“Anyone can be reasoned into doing that which would fulfill the most and strongest of current desires. However, what fulfills current desires is not necessarily the same thing as what is right.”
You seem to be overlooking the desire to be (seen to be) reasonable in itself.
“Anyone can be reasoned into doing what is right with enough argumentation”
...is probably false. But if reasoning and condemnation both modify behaviour, however imperfectly, why not use both?
How does that differ from virtue ethics?