I can’t put it into words, but I feel like not having slaves and not allowing rape within marriage are both good things that are morally superior for reasons beyond simply “I believe this and people long ago didn’t”.
The process whereby changes like this occur is what I’d call “human moral development”.
So we have a mysterious process that, with some deviations, has over time generally made values more like those we hold today. Looking back at the steps of that change, we get the feeling that somehow this looks right.
Very well. Considering that we here at LW should be particularly familiar with the power of the human mind to construct convincing hindsight narratives for nearly any hard-to-predict sequence of events, and considering that we know of biases strong enough to give us that “morally superior for reasons beyond simply being different” feeling (the halo effect, for starters) and which do give us such feelings on other matters, I hope I am not too bold to ask...
how exactly would you distinguish the universe we live in from one in which human moral change was determined by something like a random walk through value space? Naturally a random walk through value space doesn’t sound like something to which you would be willing to outsource future moral and value development. But then why does unknown process X, which happens to make you feel sort of good because you like what it’s done so far, inspire so much confidence that you’d like a godlike AI to emulate its output quite closely?
Sure, it’s better in the Bayesian sense than a process whose output so far you wouldn’t have liked, but we don’t have any empirical comparison of results against an alternative process, or do we? Also consider the other kind of change, the kind that feels right merely because it makes values more similar to our own. It seems plausible that these kinds of changes in values and morality are far more common. Even if our ancestors would have found them to be neutral changes (which seems highly doubtful), they are clearly hijacking our attention away from the fuzzy category of “right in a qualitatively different way, not just more similar to my own” that is put forward as the basis for going with the current process.
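To put a toy number on “better in the Bayesian sense” (the figures below are made up purely for illustration, not an actual model of moral history):

```python
# Toy Bayesian update with invented numbers, purely for illustration.
# Hypotheses: the process shaping values is "directed at something good"
# versus "an arbitrary walk"; the evidence is that we like its output so far.
prior_directed = 0.5
prior_arbitrary = 0.5

p_like_given_directed = 0.9   # assumption: a directed process usually yields output we endorse
p_like_given_arbitrary = 0.8  # assumption: hindsight narratives make most histories feel right too

posterior_directed = (p_like_given_directed * prior_directed) / (
    p_like_given_directed * prior_directed + p_like_given_arbitrary * prior_arbitrary
)
print(round(posterior_directed, 3))  # ~0.529: the update exists, but it is weak
```

The update is weak precisely to the extent that an arbitrary process would also tend to produce output that feels right to the values it itself produced.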
But then again, perhaps I am simply discomforted by such implicit narratives of moral progress, considering that North Korean society has demonstrably constructed a narrative with itself at the apex that feels just as good from the inside as ours does. Considering that similar comments of mine have been upvoted in the past, I think at the very least a substantial minority agrees with me that the standard LW discourse and state of thought on this matter is woefully inadequate. I mean, how is it possible that this process apparently inspires such confidence in LWers, while another process (evolution) that has also given us comparably felicitous change, change that feels so right to us humans that we often invoke an omnipotent benevolent agent to explain the result, can terrify us once we think about it clear-headedly?
I have a hunch that if we looked at the guts of this process we might find more old, sanity-shattering Outer Gods waiting for us.
PS: Would anyone be interested in a top-level or Discussion post on some of my more advanced thoughts and arguments about this? Or have I just been sloppy and missed relevant material that already covers this? :)
Edit: This comment was adapted as an article for More Right, where I will be writing a full sequence on my thoughts on metaethics.
I have a reason to believe that there is such a thing as moral progress, and it’s not completely arbitrary. The reason is not merely that I feel good about my own morality. But I have trouble explaining it in a couple sentences; there is just too much inferential distance.
(It has to do with the fact that making friends with people of different nationalities or ethnicities reliably makes people less nationalist or racist, and inoculates them against experiences that foster nationalism or racism.)
If you write a post about this, maybe I will write a response post.
I don’t see why ever-expanding circles of outgroups becoming ingroups qualifies as something that is always objectively better. To be honest, I’m not even sure this is occurring.
I’m really interested in getting a better understanding of the problems involved, so if I do end up writing a post about this (I need to get better acquainted with the newer metaethics material and do some research before I feel comfortable doing so), please do! :)
Yes! Please write this post!
I don’t think anything like this has been posted before, or has it? I do agree most posters haven’t devoted too much thought to it. I mean, how can they be so certain this process is something worth keeping, something that works on all of mankind and would still be here even if a few random events in our evolutionary past or even our written history had happened differently, yet be so sure that it would not apply to an AI? Think about that for a little bit: practically everyone agrees that FAI is important precisely because they are sure this process isn’t going to kick in for the AI. But most also seem to think that it is guaranteed to have acted on us in some way even if we humans had a very different history (the only alternative to this interpretation is anthropics: it feels right to us because we are in a very, very lucky universe where the conditions were just right, so the process is turning out fine). For that matter, they seem to implicitly think this process is much stronger than, or at the very least as strong as, genetic evolution (since we can now be pretty sure humans have been changing biologically even within recorded history) and memetic evolution on the scale of a few centuries or millennia.
Considering established nomenclature, perhaps we should call it Yidhra or the Nameless Mist. ^_^
Members of Yidhra’s cult can gain immortality by merging with her, though they become somewhat like Yidhra as a consequence. … She usually conceals her true form behind a powerful illusion, appearing as a comely young woman; only favored members of her cult can see her as she actually is.
Anyone who hands his utility function over to her wisdom for modification is basically home safe, because future development will, overall, still be progress in his eyes. Moral judgement becomes a snap: all you need to do is wait a sufficiently long time for society to get more stuff “right”, and whatever isn’t “right” in that way and gets lost is just random baggage you shouldn’t value anyway.
Eliezer gave it brief mention in his metaethics sequence, in posts such as Whither Moral Progress?
Thanks for the link!
I recalled reading something like that on OB; I think that is where I first stumbled upon the “random walk morality” challenge.
Thanks! I must admit I’m behind on my reading of the metaethics stuff. Some of the other sequences were much more interesting to me personally, and until recently I was binging on them with little regard for anything else.
Edit: Interesting that the article barely has a few upvotes, considering this is EY; this increases the probability that it hasn’t been that widely read or discussed in the last year or two.
Why doesn’t a parallel argument apply to material and scientific progress?
Presumably because it is possible to objectively assess the degree of material and scientific progress (whether they are good is another matter). We can tell that our current knowledge is better because we can say why it is better. If there were no epistemological progress, LW would be in vain!
So presumably the argument that there is no moral progress hinges on morality being something that can’t be objectively arrived at or verified. But examples of rational discussion of morality abound, not least on LW. If we can explain our morality better than our predecessors could, we are justified in thinking it is better. (But progress in morality is not quite the same as progress in values. The values might remain the same, with moral progress consisting of a better expression of those values.)
Because airplanes fly.
If you use this analogy again in the future, you may want to be more precise.
Maybe by “random walk” you mean that human moral change is nondeterministic, or depends on conditions in such a way that change in one direction is just as likely as change in another;
Or maybe you mean that human moral change is robust and deterministic, but does not approach any kind of limit, or approaches a limit that is very far from our current position in value space.
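A minimal toy sketch of the two readings, under the purely illustrative assumption that “value space” can be collapsed to a single scalar: with zero drift each step is as likely to push values one way as the other, while even a small drift term makes the long-run direction robust to the same noise.

```python
import random

def moral_trajectory(steps, drift=0.0, seed=0):
    # Toy model only: one scalar stands in for a whole "value space".
    # drift = 0.0  -> symmetric random walk (the first reading above)
    # drift > 0.0  -> noisy but robustly directional change (the second reading)
    rng = random.Random(seed)
    value = 0.0
    for _ in range(steps):
        value += drift + rng.gauss(0.0, 1.0)
    return value

# Rerun "history" ten times with different noise; the undirected walk ends up
# scattered around 0, while a small drift reliably dominates after many steps.
undirected = [round(moral_trajectory(10_000, drift=0.0, seed=s)) for s in range(10)]
directed = [round(moral_trajectory(10_000, drift=0.05, seed=s)) for s in range(10)]
print("no drift:   ", undirected)
print("small drift:", directed)
```

Of course, with only one run of history to observe, telling these apart empirically is exactly the hard part.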
If historical civilizations agree with us about as much as contemporary ones do. That said, there has no more been a constant upward slope here than there has been such a slope in technology or equality; we are just unusual due to the Enlightenment, I think.
EDIT: I think you may be assuming that we perfectly understand what we want. If I persuade a racist all humans are people, have I changed his utility function?
Do you, to a first approximation, equate moral progress with more equality?
I would consider it one form of such progress, yes.
EDIT: That was a genuine question, incidentally. I really want to know your answer.
Can you taboo what you mean by a person’s utility function?
How they decide the relative desirability of a given situation.
Said procedure tends not to resemble a utility function.
Huh?
He gets disutility from people suffering. If Jews are people, then he shouldn’t torture them to death—but he didn’t suddenly decide to value Jews, just realised they are people.
What do you mean by “value space” (any human values or desires?) and “moral change” (generally desirable human values?)? Also, a godlike AI is like playing with infinities when inserted into your moral calculus; a godlike AI leading to horrible consequences per your morality doesn’t show that your morality was fully flawed (maybe there was a tiny loophole no human would try, or be capable of, exploiting). And I think you discount the possibility that there are many moral solutions, just as there are many solutions to chess (this is especially important when noting the impact of culture on values).
This is a possible case of a currently ongoing mild value change in the US:
What I am proposing here is that for most Americans multi-generational living is a means toward maintaining the lifestyle and values which they hold dear, but the shift itself may change that lifestyle and those values in deep and fundamental ways. The initial trigger here is economic, with the first-order causal effects sociological. But the downstream effects may also be economic, as Americans become less mobile and more familialist. What can we expect? Look abroad, and look to the past.
Is this “generally desirable human values”? That depends on the values you already hold. I certainly think radically undesirable moral change might occur; looking at the value sets of past humans, we can see it would almost certainly not be a freak occurrence.
My key argument is that we have very little idea about the mechanics of moral change in real human societies. Thus future moral change is fundamentally unsafe from the standpoint of our current value set, and we do not have the tools to do anything about it. Yet. Once we get them, we will probably do away with the process we are currently chained to, unless we realize it has some remarkable properties we like.
If moral progress is a salvageable concept, then we shall see it for the first time in the history of mankind. If not, we will finally do away with the tragedy of being doomed to an alien future devoid of all we value.
Of course “we” is misleading. The society that eventually gets these tools might be one whose values are quite worthless or even horrifying from our perspective.
Does technological progress giving us the necessary leisure time and surplus resources to care about morality count as moral progress?
I don’t know, but I think that might be some of the stuff that Luke will be working on as a researcher for SIAI.