(1) Whatever moral facts exist, they must be part of the natural world. (Moral naturalism.)
In a manner of speaking, yes. Moral facts are facts about the output of a particular computation under particular conditions, so they are “part of the natural world” essentially to whatever extent you’d say the same thing about mathematical deductions. (See Math is Subjunctively Objective, Morality as Fixed Computation, and Abstracted Idealized Dynamics.)
(2) Moral facts are not written into the “book” of the universe—values must be derived from a consideration of preferences. (In philosophical parlance, this would be something like the claim that “The only sources of normativity are relations between preferences and states of affairs.”)
No. Caring about people’s preferences is part of morality, and an important part, I think, but it is not the entirety of morality, or the source of morality. (I’m not sure what a “source of normativity” is; does that refer to the causal history behind someone being moved by a moral argument, or something else?)
(The “Moral facts are not written into the ‘book’ of the universe” bit is correct.)
(3) What I “should” do is determined by what actions would best fulfill my preferences. (This is just a shorter way of saying that I “should” do “what I would do to satisfy my terminal values if I had correct and complete knowledge of what actions would satisfy my terminal values.”)
See Inseparably Right and No License To Be Human. “Should” is not defined by your terminal values or preferences; although human minds (and things causally entangled with human minds) are the only places we can expect to find information about morality, morality is not defined by being found in human minds. It’s the other way around: you happen to care about(/prefer/terminally value) being moral. If we defined “should” such that an agent “should” do whatever satisfies its terminal values (such that pebblesorters should sort pebbles into prime heaps, etc.), then morality would be a Type 2 calculator: it would have no content, and it could say anything and still be correct about the question it’s being asked. I suppose you could define “should” that way, but it’s not an adequate unpacking of what humans are actually thinking about when they talk about morality.
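One way to picture the “Type 2 calculator” point concretely is as the difference between a fixed criterion and an agent-indexed one. The sketch below is only an illustration (nothing from the sequence; the predicates and value sets are invented): a “should” that rigidly names one particular computation can disagree with an agent, while a “should” defined as “whatever satisfies the asker’s terminal values” endorses every agent automatically, which is the sense in which it has no content.

```python
# Toy illustration only: the predicates and value sets here are invented.

# A fixed criterion: "should" rigidly names one particular computation,
# no matter who is asking.
def should_fixed(action):
    # stand-in for the (enormous, implicit) moral computation
    return action in {"feed the hungry", "keep your promise"}

# An agent-indexed definition: "should" = whatever satisfies the asker's
# terminal values.  This is the "Type 2 calculator": it returns an answer
# for any agent and can never tell any agent that its values are wrong.
def should_relative(terminal_values, action):
    return action in terminal_values

human_values = {"feed the hungry", "keep your promise"}
pebblesorter_values = {"sort pebbles into prime heaps"}

for values in (human_values, pebblesorter_values):
    for action in ("feed the hungry", "sort pebbles into prime heaps"):
        print(sorted(values), action,
              "fixed:", should_fixed(action),
              "relative:", should_relative(values, action))
```

The fixed version can answer the pebblesorters with “no”; the relative version only ever hands back the values it was given, so it carries no information beyond “you value what you value.”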
Concerning preferences, what else is part of morality besides preferences?
A “source of normativity” is just anything that can justify a should or ought statement. The uncontroversial example is that goals/desires/preferences can justify hypothetical ought statements (hypothetical imperatives). So Eliezer is on solid footing there.
What is debated is whether anything else can justify should or ought statements. Can categorical imperatives justify ought statements? Can divine commands do so? Can non-natural moral facts? Can intrinsic value? And if so, why is it that these things are sources of normativity but not, say, facts about which arrangements of marbles resemble Penelope Cruz when viewed from afar?
My own position is that only goals/desires/preferences provide normativity, because the other proposed sources of normativity either don’t provide normativity or don’t exist. But if Eliezer thinks that something besides goals/desires/preferences can provide normativity, I’d like to know what that is.
I’ll do some reading and see if I can figure out what your last paragraph means; thanks for the link.
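For concreteness, the uncontroversial case above (a goal justifying a hypothetical “ought”) can be read as plain means-end reasoning. The sketch below is only an illustration, with an invented goal, invented actions, and made-up numbers:

```python
# Toy sketch of a hypothetical imperative: "if you want G, you ought to do A"
# read as "A best promotes G, given your beliefs".  Everything here is invented.

beliefs = {
    # probability that each action achieves the goal "catch the 9 o'clock train"
    "leave now": 0.9,
    "leave in an hour": 0.05,
    "stay home": 0.0,
}

def hypothetical_ought(prob_action_achieves_goal):
    """The action the stipulated goal 'justifies': the one that best serves it."""
    return max(prob_action_achieves_goal, key=prob_action_achieves_goal.get)

print(hypothetical_ought(beliefs))  # -> "leave now"

# Nothing in this calculation evaluates the goal itself; that is where any
# further "source of normativity" (categorical imperatives, divine commands,
# intrinsic value, ...) would have to enter, and that is what is disputed.
```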
Concerning preferences, what else is part of morality besides preferences?
“Preference” is used interchangeably with “morality” in a lot of discussion, but here Adam referred to an aspect of preference/morality where you care about what other people care about, and stated that you care about that but also about other things.
What is debated is whether anything else can justify should or ought statements. Can categorical imperatives justify ought statements? Can divine commands do so? Can non-natural moral facts? Can intrinsic value? And if so, why is it that these things are sources of normativity but not, say, facts about which arrangements of marbles resemble Penelope Cruz when viewed from afar?
I don’t think introducing categories like this is helpful. There are moral arguments that move you, and there is a framework, which we term “morality”, that responds to the right moral arguments: the things that should move you. The arguments are allowed to be anything (before you test them with the framework), and real humans clearly fail to be ideal implementations of the framework.
(Here, the focus is on acceptance/rejection of moral arguments; decision theory would have you generate these yourself in the way they should be considered, or even self-improve these concepts out of the system if that will make it better.)
“Preference” is used interchangeably with “morality” in a lot of discussion, but here Adam referred to an aspect of preference/morality where you care about what other people care about, and stated that you care about that but also about other things.
Oh, right, but it’s still all preferences. I can have a preference to fulfill others’ preferences, and I can have preferences for other things, too. Is that what you’re saying?
It seems to me that the method of reflective equilibrium has a partial role in Eliezer’s meta-ethical thought, but that’s another thing I’m not clear on. The meta-ethics sequence is something like 300 pages long and very dense, and I can’t keep it all in my head at the same time. I have serious reservations about reflective equilibrium (à la Brandt, Stich, and others). Do you have any thoughts on the role of reflective equilibrium in Eliezer’s meta-ethics?
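For readers who haven’t met the method in question: reflective equilibrium is usually described as adjusting general principles and particular case judgments against each other until they cohere. The sketch below is a deliberately crude caricature of that loop, with an invented one-parameter “principle” and invented cases; it only ever revises judgments toward the fitted principle, and it says nothing about how, or whether, Eliezer actually relies on the method.

```python
# Crude caricature of reflective equilibrium: fit a principle to the current
# case judgments, revise judgments that conflict with the fitted principle,
# repeat until nothing changes.  Cases, judgments, and the one-parameter
# "principle" (wrong iff harm >= threshold) are all invented.

cases = {
    "white lie":          (1, False),
    "rudeness":           (2, False),
    "broken promise":     (4, True),
    "theft":              (6, True),
    "assault":            (9, True),
    "harmless taboo act": (0, True),
}

def fit_threshold(judgments):
    """Pick the threshold that matches the most current judgments."""
    return min(
        range(0, 11),
        key=lambda t: sum((harm >= t) != wrong
                          for harm, wrong in judgments.values()),
    )

judgments = dict(cases)
for _ in range(10):
    threshold = fit_threshold(judgments)
    revised = {name: (harm, harm >= threshold)
               for name, (harm, _) in judgments.items()}
    if revised == judgments:   # equilibrium: principle and judgments cohere
        break
    judgments = revised

print("threshold:", threshold)
print(judgments)  # the "harmless taboo act" judgment has been revised away
```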
Oh, right, but it’s still all preferences. I can have a preference to fulfill others’ preferences, and I can have preferences for other things, too. Is that what you’re saying?
Possibly, but you’ve said that opaquely enough that I can imagine you intending a meaning I’d disagree with. For example, you refer to “other preferences”, while there is only one morality (preference) in the context of any given decision problem (agent), and the way you care about other agents doesn’t necessarily reference their “preference” in the same sense we are talking about our agent’s preference.
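A small sketch of the distinction being drawn here, on one reading of it (purely illustrative; the outcome names and numbers are invented): the deciding agent has exactly one utility function, and the question is only whether the “caring about the other person” term inside it references the other agent’s preference or something else about them.

```python
# Toy illustration; all names and numbers are invented.  There is one deciding
# agent and one utility function (its "preference"); the only question is what
# the "caring about the other person" term inside it refers to.

outcomes = {
    "do what they ask":        {"their_preference_met": 1.0, "their_wellbeing": 0.3},
    "do what helps them most": {"their_preference_met": 0.0, "their_wellbeing": 0.9},
}

def caring_via_their_preference(outcome):
    # "a preference to fulfill others' preferences": the term literally
    # references the other agent's preference
    return outcomes[outcome]["their_preference_met"]

def caring_via_their_wellbeing(outcome):
    # caring about the other agent without referencing their preference at all
    return outcomes[outcome]["their_wellbeing"]

for utility in (caring_via_their_preference, caring_via_their_wellbeing):
    print(utility.__name__, "->", max(outcomes, key=utility))

# Both are single utility functions belonging to the one deciding agent; only
# the first is a "preference about their preferences" in the earlier sense.
```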
It seems to me that the method of reflective equilibrium has a partial role in Eliezer’s meta-ethical thought, but that’s another thing I’m not clear on.
This is reflected in the ideas of morality being an abstract computation (something you won’t see a final answer to), and of morality needing to be found at a sufficiently meta level, so that the particular baggage of contemporary beliefs doesn’t distort the picture. You don’t want to revise the beliefs about morality yourself, because you might do it in a human way instead of in the right way.
I’ll do some reading and see if I can figure out what your last paragraph means; thanks for the link.
Ah, have you not actually read through the whole sequence yet? I don’t recommend reading it out of order, and I do recommend reading the whole thing. Mainly because some people in this thread (and elsewhere) are giving completely wrong summaries of it, so you would probably get a much clearer picture of it from the original source.
I’ve read the series all the way through, twice, but large parts of it didn’t make sense to me. By reading the linked post again, I’m hoping to combine what you’ve said with what it says and come to some understanding.
Concerning preferences, what else is part of morality besides preferences?
“Inseparably Right” discusses that a bit, though again, I don’t recommend reading it out of order.
What is debated is whether anything else can justify should or ought statements. Can categorical imperatives justify ought statements? Can divine commands do so? Can non-natural moral facts? Can intrinsic value? And if so, why is it that these things are sources of normativity but not, say, facts about which arrangements of marbles resemble Penelope Cruz when viewed from afar?
These stand out to me as wrong questions. I think the sequence mostly succeeded at dissolving them for me; “Invisible Frameworks” is probably the most focused discussion of that.
I do take some comfort in the fact that, at least at this point, even pros like Robin Hanson and Toby Ord couldn’t make sense of what Eliezer was arguing, even after several rounds of back-and-forth between them. But I’ll keep trying.
I read your last paragraph 5 times now and still can’t make sense of it.
“One should drink water if one wants to satisfy one’s thirst.” Here “should” is loosely used to mean that drinking water is the optimal instrumental action for reaching one’s terminal goal. “One should not kill,” however, is a psychological projection of one’s utility function; here “should” means that one doesn’t want others to engage in killing. The term “should” is ambiguous and vague; that’s all there is to it, and that’s the whole problem.
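That ambiguity can be made explicit by treating the one word “should” as standing in for two different functions with different arguments. The sketch below is only an illustration, with invented particulars:

```python
# Toy rendering of the ambiguity described above; the particulars are invented.
# The single word "should" stands in for two different functions here.

def should_instrumental(action, goal, how_well_action_serves_goal):
    """'You should drink water (if you want to quench your thirst)':
    a claim about means to a stipulated end."""
    return how_well_action_serves_goal[(action, goal)] >= 0.9

def should_not_evaluative(action, speakers_disvalues):
    """'One should not kill': the speaker's own valuation projected onto
    everyone's actions; no goal of the actor appears anywhere."""
    return action in speakers_disvalues

serves = {("drink water", "quench thirst"): 0.95,
          ("drink seawater", "quench thirst"): 0.10}

print(should_instrumental("drink water", "quench thirst", serves))      # True
print(should_instrumental("drink seawater", "quench thirst", serves))   # False
print(should_not_evaluative("kill", {"kill"}))                          # True
print(should_not_evaluative("sort pebbles", {"kill"}))                  # False
```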
I suppose you could define “should” that way, but it’s not an adequate unpacking of what humans are actually thinking about when they talk about morality.
Agreed 100% with this.
Of course, it doesn’t follow that what humans talk about when we talk about morality has the properties we talk about it having, or even that it exists at all, any more than analogous things follow about what humans talk about when we talk about Santa Claus or YHWH.
you happen to care about(/prefer/terminally value) being moral.
To say that “I happen to care about being moral” implies that it could be some other way… that I might have happened to care about something other than being moral.
That is, it implies that instead of caring about “the life of [my] friends and [my] family and [my] Significant Other and [my]self” and etc. and etc. and etc., the superposition of which is morality (according to EY), I might have cared about… well, I don’t know, really. This account of morality is sufficiently unbounded that it’s unclear what it excludes that’s within the range of potential human values at all.
I mean, sure, it excludes sorting pebbles into prime-numbered heaps, for example. But for me to say “instead of caring about morality, I might have cared about sorting pebbles into prime-numbered heaps” is kind of misleading, since the truth is I was never going to care about it; it isn’t the sort of thing people care about. People aren’t Pebblesorters (at least, absent brain damage).
And it seems as though, if pebblesorting were the kind of thing that people sometimes cared about, then the account of morality being given would necessarily say “Well, pebblesorting is part of the complex structure of human value, and morality is that structure, and therefore caring about pebblesorting is part of caring about morality.”
If this account of morality doesn’t exclude anything that people might actually care about, and it seems like it doesn’t, then “I happen to care about being moral” is a misleading thing to say. It was never possible that I might care about anything else.
Well, psychopaths don’t seem to care about morality so much. So we can at least point to morality as a particular cluster among things people care about.
That’s just it; it’s not clear to me that we can, on this account.
Sure, there are things within morality that some people care about and other people don’t. Caring about video games is an aspect of morality, for example, and some people don’t care about video games. Caring about the happiness of other people is an aspect of morality, and some people (e.g., psychopaths) don’t care about that. And so on. But the things that they care about instead are also parts of morality, on this account.
But, OK, perhaps there’s some kind of moral hierarchy on this account. Perhaps it’s not possible to “be moral” on this account without, for example, caring about the happiness of other people… perhaps that’s necessary, though not sufficient.
In which case “I happen to care about being moral” means that I happen to care about a critical subset of the important things, as opposed to not caring about those things.
OK, fair enough. I can accept that.