My position on these things is currently very close to that set out in THE TERRIBLE, HORRIBLE, NO GOOD, VERY BAD TRUTH ABOUT MORALITY AND WHAT TO DO ABOUT IT.
Well, I hope I explained how a denial of “moral realism” was quite compatible with the idea of moral progress.
Since that was your stated reason for denying moral progress, do you disagree with my analysis, or do you have a new reason for objecting to moral progress, or have you changed your mind about it?
I certainly don’t think there is anything wrong with the idea of moral progress in principle.
Finding some alien races would throw the most light on the issue of convergent moral evolution—but in the meantime, our history and the behaviour of other animals (e.g. dolphins) offer some support for the idea, it seems to me.
Conway Morris has good examples of convergent evolution. It is a common phenomenon—and convergent moral evolution would not be particularly surprising.
If moral behaviour arises in a space which is subject to attractors, then some moral systems will be more widespread than others. If there is one big attractor, then moral realism would have a concrete basis.
No, sorry, I don’t see it at all. When you say “some moralities are better than others”, better by what yardstick? If you’re not a moral realist, then everyone has their own yardstick.
I really recommend against ever using the thought-stopping phrase “political correctness” for any purpose, but I absolutely reject the “cultural relativism” that you attribute to me as a result, by the way. Someone performing a clitorectomy may be doing the right thing by their own lights, but by my lights they’re doing totally the wrong thing, and since my lights are what I care about I’m quite happy to step in and stop them if I have the power to, or to see them locked up for it.
To continue with your analogy, moral realists claim there is one true yardstick. Denying that doesn’t mean you can’t measure anything, or that all attempts are useless. For example, people could still use yardsticks if they were approximately the same length.
I’m still not catching it. There isn’t one true yardstick, but there has been moral progress. I’m guessing that this is against a yardstick which sounds a bit more “objective” when you state it, such as “maximizing happiness” or “maximising human potential” or “reducing hypocrisy” or some such. But you agree that thinking that such a yardstick is a good one is still a subjective, personal value judgement that not everyone will share, and it’s still only against such a judgement that there can be moral progress, no?
I don’t expect everyone to agree about morality. However, there are certainly common elements in the world’s moral systems—common in ways that are not explicable by cultural common descent.
Cultural evolution is usually even more blatantly directional than DNA evolution is. One obvious trend in moral evolution is its increase in size. Caveman morality was smaller than most modern moralities.
Cultural evolution also exhibits convergent evolution—like DNA evolution does.
Most likely, like DNA evolution, it will eventually slow down—as it homes in on a deep, isolated optimum.
If there is one such optimum, and many systems eventually find it, moral realism would have a pretty good foundation. If there were many different optima with wildly-different moralities, it would not. Probably an intermediate position is most realistic—with advanced moral systems agreeing on many things—but not everything.
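To make the attractor talk above concrete, here is a minimal illustrative sketch (my own, not anything from the discussion): treat a “moral system” as a point in a toy two-dimensional space, run many random starting points downhill on an assumed potential, and count how many distinct optima they settle into. Everything here, including the potential function, is a hypothetical stand-in; a single basin would correspond to the “one big attractor” case, several basins to the “many different optima” case.

```python
import random

def gradient(x, y):
    """Gradient of a toy potential with two basins (purely illustrative).
    The potential is f(x, y) = (x**2 - 1)**2 + y**2, with minima
    near (-1, 0) and (+1, 0)."""
    return 4 * x * (x**2 - 1), 2 * y

def descend(x, y, steps=2000, lr=0.01):
    """Follow the potential downhill from a starting point."""
    for _ in range(steps):
        gx, gy = gradient(x, y)
        x, y = x - lr * gx, y - lr * gy
    return round(x, 1), round(y, 1)

random.seed(0)
starts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(200)]
basins = {}
for sx, sy in starts:
    endpoint = descend(sx, sy)
    basins[endpoint] = basins.get(endpoint, 0) + 1

# With this two-basin potential, the 200 starting points split between two
# attractors, roughly (-1.0, 0.0) and (1.0, 0.0); a single-basin potential
# would send them all to the same point.
print(basins)
```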
(Replying again here rather than at the foot of a nugatory meta-discussion.)
I suggested C.S. Lewis’ “The Abolition of Man” as proposing a candidate for an optimum towards which moral systems have gravitated.
C.S. Lewis was, as Tim Tyler points out, a Christian, but I shall trust that we are all rational enough here to not judge the book from secondary data, when the primary source is so short, clearly written, and online. We need not don the leather cloak and posied beak to avoid contamination from the buboes of this devilish theist oozing Christian memes. It is anyway not written from a Christian viewpoint. To provide a summary would be to make soup of the soup. Those who do not wish to read that are as capable of not reading this, which is neither written from a Christian viewpoint nor by a Christian.
I am sufficiently persuaded that the eight heads under which he summarises the Tao can be found in all cultures everywhere: these are things that everyone thinks good. One might accuse him of starting from New Testament morality and recognising only that in his other sources, but if so, the defects are primarily of omission. For example, his Tao contains no word in praise of wisdom: such words can be found in the traditions he draws on, but are not prominent in the general doctrines of Christianity (though not absent either). His Tao is silent on temperance, determination, prudence, and excellence.
Those unfamiliar with talk of virtue can consult this handy aide-memoire and judge for themselves which of them are also to be found in all major moral systems and which are parochial. Those who know many languages might also try writing down all the names of virtues they can think of in each language: what do those lists have in common?
Here’s an experiment for everyone to try: think it good to eat babies. Don’t merely imagine thinking that: actually think it. I do not expect anyone to succeed, any more than you can look at your own blood and see it as green, or decide to believe that two and two make three.
What is the source of this universal experience?
Lewis says that the Tao exists, it is constant, and it is known to all. People and cultures differ only in how well they have apprehended it. It cannot be demonstrated to anyone, only recognised. He does not speculate in this work on where it comes from, but elsewhere he says that it is the voice of God within us. The less virtuous among us are those who hear that voice more faintly; the evil are those who do not hear it at all, or hear it and hate it. I think there will be few takers for that here.
Some—well, one, at least—reverse the arrow, saying that God is the good that we do, which presumably makes Satan the evil that we do.
Others say that there are objective moral facts which we discern by our moral sense, just as we discern objective physical facts by our physical senses; in both cases the relationship requires some effort to attain to the objective truth.
Others say, this is how we are made: we are so constituted as to judge some things virtuous, just as we are so constituted as to judge some things red. They may or may not give evpsych explanations of how this came to be, but whatever the explanation, we are stuck with this sense just as much as we are stuck with our experience of colour or of mathematical truth. We may arrive at moral conclusions by thought and experience, but cannot arbitrarily adopt them. Some claim to have discarded them altogether, but then, some people have managed to put their eyes out or shake their brains to pieces.
Come the Singularity, of course, all this goes by the board. Friendliness is an issue beyond just AGI.
We’re still going in circles. Optimal by what measure? By the measure of maximizing the sort of things I value? Morals have definitely got better by that measure. Please, when you reply, don’t use words like “best” or “optimal” or “merit” or any such normative phrase without specifying the measure against which you’re maximising.
Re: “Optimal by what measure? By the measure of maximizing the sort of things I value?”
No!
The basic idea is that some moral systems are better than others—in nature’s eyes. I.e. they are more likely to exist in the universe. Invoking nature as arbitrator will probably not please those who think that nature favours the immoral—but they should at least agree that nature provides a yardstick with which to measure moral systems.
I don’t have access to the details of which moral systems nature favours. If I did—and had a convincing supporting argument—there would probably be fewer debates about morality. However, the moral systems we have seen on the planet so far certainly seem to be pertinent evidence.
Measured by this standard, moral progress cannot fail to occur. In any case, that’s a measure of progress quite orthogonal to what I value, and so of course gives me no reason to celebrate moral progress.
Re: “moral progress cannot fail to occur”
Moral degeneration would typically correspond to devolution—which happens in highly radioactive environments, or under frequent meteorite impacts, or other negative local environmental conditions—provided these are avoidable elsewhere.
However, we don’t see very much devolution happening on this planet—which explains why I think moral progress is happening.
I am inclined to doubt that nature’s values are orthogonal to your own. Nature built you, and you are part of a successful culture produced by a successful species. Nature made you and your values—you can reasonably be expected to agree on a number of things.
From the perspective of the universe at large, humans are at best an interesting anomaly. Humans, plus all domesticated animals, crops, etc, compose less than 2% of the earth’s biomass. The entire biomass is a few parts per billion of the earth’s mass (maybe it’s important as a surface feature, but life is still outmassed about a million times over by the oceans and a thousand times by the atmosphere). The earth itself is a few parts per million of the solar system, which is one of several billion like it in the galaxy.
All of the mass in this galaxy, and all the other galaxies, quasars, and other visible collections of matter, are outmassed five to ten times by hydrogen atoms in intergalactic space.
And all that, all baryonic matter, composes a few percent of the mass-energy of the universe.
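The ratios above are easy to sanity-check. Below is a rough back-of-the-envelope script using commonly cited order-of-magnitude masses; the total-biomass figure in particular is only a loose estimate, so treat all the inputs as approximate assumptions rather than the commenter’s own numbers.

```python
# Rough order-of-magnitude masses in kilograms (approximate, commonly
# cited figures; the biomass value in particular is a loose estimate).
biomass      = 2e15      # total living biomass (wet weight), very rough
atmosphere   = 5.1e18    # Earth's atmosphere
oceans       = 1.4e21    # Earth's oceans
earth        = 6.0e24    # Earth
solar_system = 2.0e30    # dominated by the Sun

print(f"biomass / earth       ≈ {biomass / earth:.1e}")       # ~3e-10: well under a part per billion
print(f"oceans / biomass      ≈ {oceans / biomass:.0e}")       # ~1e6: outmassed about a million times
print(f"atmosphere / biomass  ≈ {atmosphere / biomass:.0e}")   # ~3e3: outmassed a few thousand times
print(f"earth / solar_system  ≈ {earth / solar_system:.0e}")   # ~3e-6: a few parts per million
```

With these inputs the script roughly reproduces the stated ratios, give or take the uncertainty in the biomass estimate.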
Negative?! They’re great for the bacteria that survive.
And I suspect those with “devolved” morality would feel the same way.
Sufficiently hostile environmental conditions destroy living things by causing error catastrophes / mutational meltdowns. You have to go in the opposite direction to see constructive, adaptive evolution—which is basically what I was talking about.
Most living systems can be expected to seek out the more favourable conditions. If they are powerful enough to migrate, they will mostly exist where living is practical, and mostly die out under conditions which are unfavourable.
If your environment is insufficiently hostile there will be no natural selection at all. Evolution does not have a direction. The life that survives, survives; the life that does not, does not. That’s it. Conditions are favorable for some life and unfavorable for others. There are indeed conditions where few complex, macroscopic life forms will develop, but that is because in those conditions it is disadvantageous to be complex or macroscopic. If you live next to an underwater steam vent you’re probably the kind of thing that likes to live there and won’t do well in Monaco.
Re: “Evolution does not have a direction.”
My essay about that: http://originoflife.net/direction/
See also the books “Non-Zero” and “Evolution’s Arrow”.
There is no reason to associate complexity with moral progress.
Sure. The evidence for moral progress is rather different—e.g. see:
“Richard Dawkins—The Shifting Moral Zeitgeist”
http://www.youtube.com/watch?v=uwz6B8BFkb4
Wait a minute. This entire conversation begins with you conflating moral progress and directional evolution.
Is the relationship between biological and ethical evolution just an analogy or something more for you?
Then I say: what you call good biological changes other organisms would experience as negative changes and vice versa.
You throw out the thesis about evolution having a direction because life fills more and more niches and is more and more complex. If those are things that are important to you, great. But that doesn’t mean any particular organism should be excited about evolution or that there is a fact of the matter about things getting better. If you have the adaptations to survive in a complex, niche-saturated environment, good for your DNA! If you don’t, you’re dead. If you like complexity things are getting better. If you don’t things are getting worse. But the ‘getting better’ or ‘getting worse’ is in your head. All that is really happening is that things are getting more complex.
And this is the point about the ‘shifting moral Zeitgeist’ (which is a perfectly fine turn of phrase btw, because it doesn’t imply the current moral Zeitgeist is any truer than the last one). Maybe you can identify trends in how values change but that doesn’t make the new values better. But since the moral Zeitgeist is defined by the moral beliefs most people hold, most people will always see moral history up to that point in time as progressive. Similarly, most young people will experience moral progress the rest of their lives as the old die out.
I think there is some kind of muddle occurring here.
I cited the material about directional evolution in response to the claim that: “Evolution does not have a direction.”
It was not to do with morality, it was to do with whether evolution is directional. I thought I made that pretty clear by quoting the specific point I was responding to.
Evolution is a gigantic optimization mechanism, a fitness maximizer. It operates in a relatively benign environment that permits cumulative evolution—thus the rather obvious evolutionary arrow.
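As a side note, the “evolutionary arrow” being claimed here is the sort of thing a toy model of cumulative selection can make visible. The sketch below is purely illustrative (my own construction, not anything cited in the thread): a population of bit-strings under mutation plus fitness-proportional selection shows mean fitness climbing, while the same population under mutation and random reproduction (drift) stays roughly where it started.

```python
import random

random.seed(1)
LENGTH, POP, GENERATIONS, MUT = 50, 100, 60, 0.02

def fitness(genome):
    """Toy fitness: number of 1-bits (purely illustrative)."""
    return sum(genome)

def mutate(genome):
    """Flip each bit independently with probability MUT."""
    return [b ^ 1 if random.random() < MUT else b for b in genome]

def step(pop, select):
    if select:
        # Cumulative selection: breed preferentially from fitter genomes.
        pop = random.choices(pop, weights=[fitness(g) + 1 for g in pop], k=POP)
    else:
        # Pure drift: breed from genomes chosen uniformly at random.
        pop = random.choices(pop, k=POP)
    return [mutate(g) for g in pop]

for select in (True, False):
    pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop = step(pop, select)
    mean = sum(fitness(g) for g in pop) / POP
    label = "selection" if select else "drift    "
    print(f"{label}: mean fitness after {GENERATIONS} generations ≈ {mean:.1f} / {LENGTH}")
```

Under selection the mean fitness rises well above its starting value of roughly 25, while under drift it stays near 25; only the selective run shows anything arrow-like.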
Re: “Is the relationship between biological and ethical evolution just an analogy or something more for you?”
Ethics is part of biology, so there is at least some link. Beyond that, I am not sure what sort of analogy you are suggesting. Maybe in some evil parallel universe, morality gets progressively nastier over time. However, I am more concerned with the situation in the world we observe.
The section you quoted is out of context. I was actually explaining how the idea that “moral progress cannot fail to occur” was not a logical consequence of moral evolution—because of the possibility of moral devolution. It really is possible to look back and conclude that your ancestors had better moral standards.
We have already discussed the issue of whether organisms can be expected to see history as moral progress on this thread, starting with:
“If drift were a good hypothesis, steps “forwards” (from our POV) would be about as common as steps “backwards”.”
http://lesswrong.com/lw/1m5/savulescu_genetically_enhance_humanity_or_face/1ffn
I haven’t read the books, though I’m familiar with the thesis. Your essay is afaict a restatement of that thesis. Now, maybe the argument is sufficiently complex that it needs to be made in a book and I’ll remain ignorant until I get around to reading one of these books. But it would be convenient if someone could make the argument in few enough words that I don’t have to spend a month investigating it.
Re: “If your environment is insufficiently hostile there will be no natural selection at all.”
See Malthus on resource limitation, though.
So, “might is right” …
Nature is my candidate for providing an objective basis for morality.
Moral systems that don’t exist—or soon won’t exist—might have some interest value—but generally, it is not much use being good if you are dead.
“Might is right” does not seem like a terribly good summary of nature’s fitness criteria. They are more varied than that—e.g. see the birds of paradise—which are often more beautiful than mighty.
Ah, ok. That is enlightening. Of the Great Remaining Moral Realists, we have:
Tim Tyler: “The basic idea is that some moral systems are better than other—in nature’s eyes. I.e. they are more likely to exist in the universe.”
Stefan Pernar: “compassion as a rational moral duty irrespective of an agents level of intelligence or available resources.”
David Pearce: “Pleasure and pain are intrinsically motivating and objectively Good and Bad, respectively”
Gary Drescher: “Use the Golden Rule: treat others as you would have them treat you”
Drescher’s use of the Golden Rule comes from his views on acausal game-theoretic cooperation, not from moral realism.
But he furthermore thinks that this can be leveraged to create an objective morality.
Isn’t this a definitional dispute? I don’t think Drescher thinks some goal system is privileged in a queer way. Timeless game theory might talk about things that sound suspiciously like objective morality (all timelessly-trading minds effectively having the same compromise goal system?), but which are still mundane facts about the multiverse and counterfactually dependent on the distribution of existing optimizers.
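One way to see the flavour of that claim is a toy one-shot Prisoner’s Dilemma between two agents known to run the identical decision procedure. This is an illustrative sketch of the general idea only, not Drescher’s actual formalism, and the payoffs and names are hypothetical. Since both agents’ moves are guaranteed to match, only the diagonal outcomes are reachable, and the shared procedure picks the better diagonal.

```python
# Payoff to "me" given (my_move, other_move); standard PD ordering T > R > P > S.
PAYOFF = {
    ("C", "C"): 3,  # reward
    ("C", "D"): 0,  # sucker
    ("D", "C"): 5,  # temptation
    ("D", "D"): 1,  # punishment
}

def correlated_choice():
    """Decision procedure shared by both agents.

    Because both agents run this exact procedure, their moves are guaranteed
    to match, so only the diagonal outcomes (C, C) and (D, D) are reachable.
    The procedure picks whichever diagonal outcome pays more.
    """
    return max(("C", "D"), key=lambda move: PAYOFF[(move, move)])

my_move = correlated_choice()
other_move = correlated_choice()  # same procedure, hence the same move
print(my_move, other_move, "->", PAYOFF[(my_move, other_move)])  # prints: C C -> 3

# A causal best-responder that ignores the correlation would defect
# (5 > 3 and 1 > 0 row-wise), and a pair of them would land on (D, D) = 1.
```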
When I spoke to Drescher at SS09 he seemed to imply a belief in moral realism. I’ll have to go read Good and Real to see what he actually says.
And there are plenty of moral realists who think that there is such a thing as morality, and our ethical theories track it, and we haven’t figured out how to fully specify it yet.
I don’t think Stefan Pernar makes much sense on this topic.
David Pearce’s position is more reasonable—and not very different from mine—since pleasure and pain (loosely speaking) are part of what nature uses to motivate and reward action in living things. However, I disagree with David on a number of things—and prefer my position. For example, I am concerned that David will create wireheads.
I don’t know about Gary’s position—but the Golden Rule is a platitude that most moral thinkers would pay lip service to—though I haven’t heard it used as a foundation of moral behaviour before. Superficially, things like sexual differences make the rule not-as-golden-as-all-that.
Also: “Some examples of robust “moral realists” include David Brink, John McDowell, Peter Railton, Geoffrey Sayre-McCord, Michael Smith, Terence Cuneo, Russ Shafer-Landau, G.E. Moore, Ayn Rand, John Finnis, Richard Boyd, Nicholas Sturgeon, and Thomas Nagel.”
Here is one proposed candidate for that optimum.
That link is to “C.S. Lewis’s THE ABOLITION OF MAN”.
And I would be interested to know what people think of Lewis’ Tao, and the arguments he makes for it.
Since:
http://en.wikipedia.org/wiki/C._S._Lewis#Conversion_to_Christianity
...I figure there would need to be clearly-evident redeeming features for anyone here to bother.
Meh. If someone being a theist were enough reason to not bother reading their arguments, we wouldn’t read much at all.
You have to filter crap out somehow.
Using “christian nutjob” as one of my criteria usually seems to work pretty well for me. Doesn’t everyone do that?
C. S. Lewis is a Christian, but hardly a nutjob. I filter out Christian nutjobs, but not all Christians.
Are there Christian non-nutjobs? It seems to me that Christianity poisons a person’s whole world view—rendering them intellectually untrustworthy. If they believe that, they can believe anything.
Looking at:
http://en.wikipedia.org/wiki/C._S._Lewis#The_Christian_apologist
...there seems to be a fair quantity of nutjobbery to me.
Except insofar as Christianity is a form of nutjobbery, of course.
Well… yes and no. I wouldn’t trust a Christian’s ability to do good science, and I don’t think a Christian could write an AI (unless the Christianity was purely cultural and ceremonial). But Christians can and do write brilliant articles and essays on non-scientific subjects, especially philosophy. Even though I disagree with much of it, I still appreciate C.S. Lewis or G. K. Chesterton’s philosophical writing, and find it thought provoking.
In this case, the topic was moral realism. You think Christians have some worthwhile input on that? Aren’t their views on the topic based on the idea of morality coming from God on tablets of stone?
No, no more than we believe that monkeys turn into humans.
Christians believe human morality comes from god. Rather obviously disqualifies them from most sensible discussions about morality—since their views on the topic are utter nonsense.
This isn’t fully general to all Christians. For instance, my best friend is a Christian, and after prolonged questioning, I found that her morality boils down to an anti-hypocrisy sentiment and a social-contract-style framework to cover the rest of it. The anti-hypocrisy thing covers self-identified Christians obeying their own religion’s rules, but doesn’t extend them to anyone else.
You can’t read everything; you have to collect evidence on what’s going to be worth reading. For a Christian writing on this sort of moral philosophy: I think that Lewis is often interesting, but I plan to go to bed rather than read it, unless I get some extra evidence to push it the other way.
FWIW, I recommend it.
AFAIR, that, the Narnia stories, and the Ransom trilogy are the only Lewis I’ve read. Are there others you have found interesting?
“However, there are certainly common elements in the world’s moral systems—common in ways that are not explicable by cultural common descent.”
They could be explicable by common evolutionary descent: for instance, our ethics probably evolved because it was useful to animals living in large groups or packs with social hierarchies.
“If there is one such optimum, and many systems eventually find it, moral realism would have a pretty good foundation.”
No, not at all. That optimum may have evolved to be useful under the conditions we live in, but that doesn’t mean it’s objectively right.
You don’t seem to be entering into the spirit of this. The idea of there being one optimum which is found from many different starting conditions is not subject to the criticism that its location is a function of accidents in our history.
Rather obviously—since human morality is currently in a state of progressive development—it hasn’t reached any globally optimal value yet.
Maybe I misunderstood your original comment. You seemed to be arguing that moral progress is possible based on convergence. My point was even if it does reach a globally convergent value, that doesn’t mean that value is objectively optimal, or the true morality.
In order to talk about moral “progress”, or an “optimum” value, you need to first find some objective yardstick. Convergence does not establish that such a yardstick exists.
I agree with your comment, except that there are some meaningful definitions of morality and moral progress that don’t require morality to be anything but a property of the agents who feel compelled by it, and which don’t just assume that whatever happens is progress.
(In essence, it is possible—though very difficult for human beings—to figure out what the correct extrapolation from our confused notions of morality might be, remembering that the “correct” extrapolation is itself going to be defined in terms of our current morality and aesthetics. This actually ends up going somewhere, because our moral intuitions are a crazy jumble, but our more meta-moral intuitions like non-contradiction and universality are less jumbled than our object-level intuitions.)
Well, of course you can define “objectively optimal morality” to mean whatever you want.
My point was that if there is natural evolutionary convergence, then it makes reasonable sense to define “optimal morality” as the morality of the optimal creatures. If there was a better way of behaving (in the eyes of nature), then the supposedly optimal creatures would not be very optimal.
Additionally, the lengths of the yardsticks could be standardized to make them better—for example, as has actually occurred, by tying the units of “yards” to the previously-standardized metric system.
I was criticising the idea that “all moralities are of equal merit”. I was not attributing that idea to you. Looking at:
http://en.wikipedia.org/wiki/Cultural_relativism
...it looks like I used the wrong term.
http://en.wikipedia.org/wiki/Moral_relativism
...looks slightly better—but still is not quite the concept I was looking for—I give up for the moment.
I’m not sure if there’s standard jargon for “all moralities are of equal merit” (I’m pretty sure that’s isomorphic to moral nihilism, anyway). However, people tend to read various sorts of relativism that way, and it’s not uncommon in discourse to see “cultural relativism” associated with such a view.
Believing that all moralities are of equal merit is a particularly insane brand of moral realism.
What I was thinking of was postmodernism—in particular the sometimes-fashionable postmodern conception that all ideas are equally valid. It is a position sometimes cited in defense of the idea that science is just another belief system.
Thanks for that link: I had seen that mentioned before and had wanted to read it.
I’ve been reading that (I’m on page 87), and I haven’t gotten to a part where he explains how that makes moral progress meaningless. Why not just define moral progress sort of as extrapolated volition (without the “coherent” part)? You don’t even have to reference convergent moral evolution.
I don’t think he talks about moral progress. But the point is that no matter how abstractly you define the yardstick by which you observe it, if someone else prefers a different yardstick there’s no outside way to settle it.
I don’t think it mentions moral progress. It just seems obvious that if there is no absolute morality, then the only measures against which there has been progress are those that we choose.
Of course it isn’t “objective” or absolute. I already disclaimed moral realism (by granting arguendo the validity of the linked thesis). Why does it follow that you “can’t see how to build a useful model of ‘moral progress’”? Must any model of moral progress be universal?
It is a truism that as the norms of the majority change, the majority of people will see subjective moral progress. That kind of experience is assumed once you know that moralities change. So when you use the term moral progress it is reasonable to assume you think there is some measure for that progress other than your own morality. The way you’re using the word progress is throwing a couple of us off.
If you’re talking about progress relative to my values, then absolutely there has been huge progress.
I’m not talking specifically about that. Mainly what I’m wondering is what exactly motivated you to say “can’t see how …” in the first place. What makes a measure of progress that you choose (or is chosen based on some coherent subset of human moral values, etc.) somehow … less valid? not worthy of being used? something else?
It’s possible we’re violently agreeing here. By my own moral standards, and by yours, there has definitely been moral progress. Since there are no “higher” moral standards against which ours can be compared, there’s no way for my feelings about it to be found objectively wanting.