Why didn’t people (apparently?) understand the metaethics sequence?
Perhaps back up a little. Does the metaethics sequence make sense? As I remember it, a fair bit of it was a long, rambling and esoteric bunch of special pleading—frequently working from premises that I didn’t share.
Long and rambling? Sure. But then so is much else in the sequences, including the quantum mechanics sequence. As for arguing from premises you don’t share, what would those premises be? It’s a sincere question, and knowing your answer would be helpful for writing my own post(s) on metaethics.
I recall not being able to identify with the premises… some of them were really quite significant.
I now recall: it was with “The Moral Void”, in which apparently I had different answers than expected.
“Would you kill babies if it was inherently the right thing to do?”
The post did discuss morality on/off switches later in the context of religion, as an argument against (wishing for / wanting to find) universally compelling arguments.
The post doesn’t work for me because it seems to argue against the value of universally compelling arguments with the implicit assumption that, since universally compelling arguments don’t exist, any universally compelling argument would be false.
I happen to (mostly) agree that there aren’t universally compelling arguments, but I still wish there were. The metaethics sequence failed to talk me out of valuing this.
Also, there were some particular examples that didn’t work for me, since I didn’t have a spontaneous ‘ugh’ field around some of the things that were supposed to be bad.
I see Jack expressed this concept here:
And it definitely is true that much of our moral language functions like rigid designators, which hides the causal history of our moral beliefs. This explains why people don’t feel like morality changes under counterfactuals—i.e. if you imagine a world in which you have a preference for innocent children being murdered, you don’t believe that murdering children is therefore moral in that world. I outlined this in more detail here. I didn’t use the term ‘rigid designator’ in that post, but the point is that what we think is moral is invariant under counterfactuals.
For whatever reason, I feel like my morality changes under counterfactuals.
I happen to (mostly) agree that there aren’t universally compelling arguments, but I still wish there were. The metaethics sequence failed to talk me out of valuing this.
But you realize that Eliezer is arguing that there aren’t universally compelling arguments in any domain, including mathematics or science? So if that doesn’t threaten the objectivity of mathematics or science, why should that threaten the objectivity of morality?
For whatever reason, I feel like my morality changes under counterfactuals.
Waah? Of course there are universally compelling arguments in math and science. (Can you elaborate?)
For whatever reason, I feel like my morality changes under counterfactuals.
Can you elaborate?
It is easy for me to think of scenarios where any particular behavior might be moral. So if someone asks me, “imagine that it is the inherently right thing to kill babies,” it seems rather immediate to answer that in that case, killing babies would be inherently right.
This is also part of the second problem, where there aren’t so many things I consider inherently wrong or right … I don’t seem to have the same ugh fields as the intended audience. (One thing which seems inherently right to me is that there would be an objective morality, it just happens to be apparently false in this universe, for now.)
Of course there aren’t. You can trivially imagine programming a computer to print, “2+2=5” and no verbal argument will persuade it to give the correct answer—this is basically Eliezer’s example! He also says that, in principle, an argument might persuade all the people we care about.
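To make the idea concrete, here is a minimal sketch of that kind of unpersuadable program (my own illustration, not code from the sequence):

```python
# A toy program hard-wired to assert the wrong sum. It receives an "argument"
# but never consults it, so no verbal appeal can change its output.

def stubborn_arithmetic(argument: str) -> str:
    """Ignore the argument entirely and assert the wrong sum anyway."""
    return "2+2=5"

if __name__ == "__main__":
    appeals = [
        "Two apples next to two apples make four apples.",
        "Peano arithmetic proves that 2 + 2 = 4.",
        "Every calculator on Earth disagrees with you.",
    ]
    for appeal in appeals:
        print(stubborn_arithmetic(appeal))  # prints "2+2=5" every time
```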
While his point about evolution and ‘psychological unity’ seems less clear than I remembered, he does explicitly say elsewhere that moral arguments have a point. You should assign a high prior probability to a given human sharing enough of your values to make argument worthwhile (assuming various optimistic points about argumentation in general with this person). As for me, I do think that moral questions which once provoked actual war can be settled for nearly all humans. I think logic and evidence play a major part in this. I also think it wouldn’t take much of either to get nearly all humans to endorse, e.g., the survival of humanity—if you think that part’s unimportant, you may be forgetting Eliezer’s goal (and in the abstract, you may be thinking of a narrower range of possible minds).
One thing which seems inherently right to me is that there would be an objective morality, it just happens to be apparently false in this universe
How could it be true, aside from a stronger version of the previous paragraph? I don’t know if I understand what you want.
Of course there aren’t. You can trivially imagine programming a computer to print, “2+2=5” and no verbal argument will persuade it to give the correct answer -
You can’t persuade rocks either. Don’t you think this might be just a wee bit of a strawman of the views of people who believe in universally compelling arguments?
Waah? Of course there are universally compelling arguments in math and science. (Can you elaborate?)
Okay… I need to write a post about that.
It is easy for me to think of scenarios where any particular behavior might be moral. So if someone asks me, “imagine that it is the inherently right thing to kill babies,” it seems rather immediate to answer that in that case, killing babies would be inherently right.
Are you really imagining a coherent possibility, though? I mean, you could also say, “If someone tells me, ‘imagine that p & ~p,’ it seems that in that case, p & ~p.”
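For what it’s worth, the incoherence of the p & ~p case can be made formal. A minimal sketch in Lean (my own addition, assuming nothing beyond ordinary propositional logic):

```lean
-- p ∧ ¬p is refutable, so "imagine that p & ~p" does not pick out a
-- coherent possibility to imagine.
example (p : Prop) : ¬(p ∧ ¬p) :=
  fun h => h.2 h.1  -- from p and ¬p together, derive False
```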
Are you really imagining a coherent possibility, though?
I am. It’s so easy to do I can’t begin to guess what the inferential distance is.
Wouldn’t it be inherently right to kill babies if they were going to suffer?
Wouldn’t it be inherently right to kill babies if they had negative moral value to me, such as baby mosquitoes carrying malaria?
I think it’s fair, principle of charity and all, to assume “babies” means “baby humans” specifically. A lot of things people say about babies become at best false, at worst profoundly incoherent, without this assumption.
But you’re right of course, that there are many scenarios in which killing human babies leads to better solutions than not killing them. Every time I consider pointing this out when this question comes up, I decide that the phrase “inherently right” is trying to do some extra work here that somehow or other excludes these cases, though I can’t really figure out how it is supposed to do that work, and it never seems likely that raising the question will get satisfying answers.
This seems like it might get back to the “terminal”/“instrumental” gulf, which is where I often part company with LW’s thinking about values.
Yeah, these were just a couple of examples. (I can also imagine feeling about babies the way I feel about mosquitoes with malaria. Do I have an exceptionally good imagination? As the imagined feelings become more removed from reality, the examples must get more bizarre, but that is the way with counterfactuals.) But there being ready examples isn’t the point. I am asked to consider that I have this value, and I can; there is no inherent contradiction.
Perhaps as you suggest, there is no p & ~p contradiction because preserving the lives of babies is not a terminal value. And I should replace this example with an actual terminal value.
But herein lies a problem. Without objective morality, I’m pretty sure I don’t have any terminal values—everything depends on context. (I’m also not very certain what a terminal value would look like if there were an objective morality.)
Could you clarify a bit? I’d be curious to hear your ethical views myself, particularly your metaethical views. I was convinced of some things by the Metaethics sequence (it convinced me that, despite the is-ought distinction, ethics could still exist), but I may have made a mistake, so I want to know what you think.
That’s an open-ended question which I don’t have many existing public resources to address—but thanks for your interest. Very briefly:
I like evolution; Yudkowsky seems to dislike it. Ethically, Yudkowsky is an intellectual descendant of Huxley, while I see myself as thinking more along the lines of Kropotkin.
Yudkowsky seems to like evolutionary psychology. So far evolutionary psychology has only really looked at human universals. To take understanding of the mind further, it is necessary to move to a framework of gene-meme coevolution. Evolutionary psychology is politically correct—through not examining human differences—but is scientifically very limited in what it can say, because of the significance of cultural transmission in human behaviour.
Yudkowsky likes utilitarianism. I view utilitarianism largely as a pretty unrealistic ethical philosophy adopted by ethical philosophers for signalling reasons.
Yudkowsky is an ethical philosopher—and seems to be on a mission to persuade people that giving control to a machine that aggregates their preferences will be OK. I don’t have a similar axe to grind.