I’m continually surprised that so many people here take various ideas about morality seriously. For me, rationality is very closely associated with moral skepticism, and this view seems to be shared by almost all the rationalist-type people I meet IRL here in northern Europe. Perhaps it has something to do with secularization having progressed further in Europe than in the US?
The rise of rationality in history has undermined not only religion, but at the same time and for the same reasons, all forms of morality. As I see it, one of the main challenges for people interested in rationality is to explore how to live without morality. Many “rationalists” instead go into denial and try to construct some supposedly rational form of morality, more often than not suspiciously similar to the traditional ideas. I’m not sure whether or not Eliezer’s metaethical project is an example of this, but in any case he is commendable for taking the issues very seriously. Most other LW:ers seem to be far too uncritical toward their moral prejudices.
I think you need to define what you mean by “morality” a lot more carefully. It’s hard to attribute meaning to the statement “People should act without morals.” Even if you mean “Everyone should act strictly within their own self-interest”, evolutionary psychology would demand that you define the unit of identity (the body? the gene?), and would smuggle most of what we think of as “morality” back into “self-interest”.
Moral skepticism is not particularly impressive as it’s the simplest hypothesis. Certainly, it seems extremely hard to square moral realism with our immensely successful scientific picture of a material universe.
The problem is that we still must choose how to act. Without a morality, all we can say is that we prefer to act in some arbitrary way, much as we might arbitrarily prefer one food to another. And... that’s it. We can make no criticism whatsoever of the actions of others, not even that they should act rationally. We cannot say that striving for truth is any better than killing babies (or following a religion?) any more than we can say green is a better color than red.
At best we can make empirical statements of the form “A person should act in such-and-such manner in order to achieve some outcome”.
Some people are prepared to bite this bullet. Yet most who say they do continue to behave as if they believed their actions were more than arbitrary preferences.
My point is that people striving to be rational should bite this bullet. As you point out, this might cause some problems—which is the challenge I propose that rationalists should take on.
You may wish to think of your actions as non-arbitrary (that is, justified in some special way, cf. the link Nick Tarleton provided), and you may wish to (non-arbitrarily) criticize the actions of others etc. But wishing doesn’t make it so. You may find it disturbing that you can’t “non-arbitrarily” say that “striving for truth is better than killing babies”. This kind of thing prompts most people to shy away from moral skepticism, but if you are concerned with rationality, you should hold yourself to a higher standard than that.
I think the problem is much more profound than you suggest. It is not something that rationalists can simply take on with a non-infinitesimal confidence that progress will be made. Certainly not amateur rationalists doing philosophy in their spare time (not that this isn’t healthy). I don’t mean to say that rationalists should give up, but we have to choose how to act in the meantime.
Personally, I find the situation so desperate that I am prepared to simply assume moral realism when I am deciding how to act, with the knowledge that this assumption is very implausible. I don’t believe this makes me irrational. In fact, given our current understanding of the problem, I don’t know of any other reasonable approaches.
Incidentally, this position is reminiscent of both Pascal’s wager and of an attitude towards morality and AI which Eliezer claimed to previously hold but now rejects as flawed.
OB: “Arbitrary”
(Wait, Eliezer’s OB posts have been imported to LW? Win!)
I’ve read it before. Though I have much respect for Eliezer, I think his excursions into moral philosophy are very poor. They show a lack of awareness that all the issues he raises have been hashed out decades or centuries ago at a much higher level by philosophers, both moral realists and otherwise. I’m sure he believes that he brings some new insights, but I would disagree.
My position may be one of those you criticize. I believe that something approximating “morality” is both worth adhering to and important.
I think a particular kind of morality helps human societies win.
Morality, as I understand it, consists of a set of constraints on acceptable utility functions combined with observable signals of those constraints.
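If it helps to make that definition concrete, here is a minimal toy sketch in Python under my own assumptions (the names, the particular constraint, and the signal are hypothetical illustrations, not anything proposed in this thread): constraints filter which utility functions count as acceptable, and signals are publicly checkable records that others can use to verify compliance.

```python
# Toy sketch (hypothetical): "morality" as constraints on acceptable utility
# functions plus observable signals that others can inspect.
from typing import Callable, Dict, List

Outcome = Dict[str, float]
Utility = Callable[[Outcome], float]
Constraint = Callable[[Utility], bool]

def values_kept_agreements(u: Utility) -> bool:
    """Constraint: all else equal, keeping your word must not rank below breaking it."""
    kept = {"my_payoff": 1.0, "promise_kept": 1.0}
    broke = {"my_payoff": 1.0, "promise_kept": 0.0}
    return u(kept) >= u(broke)

def public_track_record(actions: List[str]) -> bool:
    """Observable signal: a history of actions that other agents can inspect."""
    return "broke_promise" not in actions

def is_acceptable(u: Utility, constraints: List[Constraint]) -> bool:
    return all(c(u) for c in constraints)

# A largely self-interested utility function that still satisfies the constraint.
u_example: Utility = lambda o: o["my_payoff"] + 0.1 * o["promise_kept"]
print(is_acceptable(u_example, [values_kept_agreements]))      # True
print(public_track_record(["kept_promise", "kept_promise"]))   # True
```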
Do I believe that this type of morality is in any sense ultimately correct? No. In a technical sense, I am a complete and total moral skeptic.
However, I do think publicly-observable moral behavior is useful for coordination and cooperation, among other things. To the extent that this makes us better off—to the extent it makes me better off—I would certainly think that even a moral skeptic might find it interesting.
Perhaps LWers are “too uncritical toward their moral prejudices.” But it’s at least worth examining which of those “moral prejudices” are useful, where this doesn’t conflict with other, more deeply held values.
Finally, morality construed broadly enough is a condition of rationality: if morality is taken simply to be your set of values and preferences, then it is literally necessary for a well-defined utility function, which is itself (arguably) a necessary component of rationality.
It seems to me that your position can be interpreted in at least two ways.
Firstly, you might mean that it is useful to have common standards of behavior to make society run more smoothly and peacefully. I think almost everyone would agree with this, but these common standards might be non-moral. People might consider them simple social conventions that they adopt out of self-interest (to make their interactions with society flow more smoothly), but that have no special metaphysical status and do not supersede their personal values if a conflict arises.
Secondly, you might mean that it is useful that people in general are moral realists. The question then remains how you yourself, being “a complete and total moral skeptic”, relate to questions of morality in your own life and in communication with people holding similar views. Do you make statements about what is morally right or wrong? Do you blame yourself or others for breaking moral rules? Perhaps you don’t, but I get the impression that many LW:ers do. (In the recent survey, only 10.9% reported that they do not believe in morality, while over 80% reported themselves to support some moral theory.)
Regarding the second interpretation, one might also ask: if it works for you to be a moral skeptic in a world of moral realists, why shouldn’t it work for other people too? Why wouldn’t it work for all people? More to the point, I don’t think that morality is very useful. Despite what some feared, people didn’t become monsters when they stopped believing in God, and their societies didn’t collapse. I don’t think any of these things will happen when they stop believing in morality either.
I don’t think they do have any “special metaphysical status,” and indeed I agree that they are “simple social conventions.” Do I make statements about moral rights and wrongs? Only by reference to a framework that I believe the audience accepts. In LW’s case, this seems broadly to be utilitarianism or some variant.
That’s precisely my point—morality doesn’t have to have any metaphysical status. Perhaps the problem is simply that we haven’t defined the term well enough. Regardless, I suspect that more than a few LWers are moral skeptics, in that they don’t hold any particular philosophy to be universally, metaphysically right, but they personally value social well-being in some form, and so we can usually assume that helping humanity would be considered positively by a LW audience.
As long as everyone’s “personal values” are roughly compatible with the maintenance of society, then yes, losing the sense of morality that excludes such values may not be a problem. I was simply including the belief that personal values should not produce antisocial utility functions (that is, utility functions that have a positive term for another person’s suffering) as morality.
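As a toy illustration of that parenthetical (again in Python, with hypothetical weights and names of my own, not a definition anyone here has committed to), the test is simply whether the utility function goes up when another person’s suffering goes up, everything else held fixed:

```python
# Toy check (hypothetical): a utility function is "antisocial" in this sense
# if increasing another person's suffering, all else fixed, raises its value.

def prosocial_u(my_payoff: float, their_suffering: float) -> float:
    return my_payoff - 0.5 * their_suffering   # non-positive weight on suffering

def antisocial_u(my_payoff: float, their_suffering: float) -> float:
    return my_payoff + 2.0 * their_suffering   # positive term for suffering

def gains_from_suffering(u) -> bool:
    return u(1.0, 1.0) > u(1.0, 0.0)

print(gains_from_suffering(prosocial_u))   # False
print(gains_from_suffering(antisocial_u))  # True
```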
Do I think that these things are metaphysically supported? No. But do I think that with fewer prosocial utility functions, we would likely see much lower utilities for most people? Yes.
Of course, whether you care about that depends on how much of a utilitarian you are.
Certainly, the main ideological tenet of Less Wrong is rationality. However, other tenets slip in, some more justified than others. One of the tenets that I think needs a much more critical view is something I call “reductionism” (perhaps closer to Daniel Dennett’s “greedy reductionism” than what you think of). The denial of morality is perhaps one of the best examples of the fallacy of reductionism. Human morality exists and is not arbitrary. Intuitively, the denial of morality is absurd, and ideologies that result in intuitively absurd conclusions should require extraordinary justification to keep believing in them. In other words, you must be a rationalist first and a reductionist second.
First, science is not reductionist. Science doesn’t claim that everything can be understood in terms of what we already understand. Science makes hypotheses, and when a hypothesis fails to explain everything, it looks for other hypotheses. So far, we don’t understand how morality works. It is a conclusion of reductionism – not science – that morality is meaningless or doesn’t exist. Science is silent on the nature of morality: science is busy observing, not making pronouncements by fiat (or faith).
We believe that everything in the world makes sense. That everything is explainable, if only we knew enough and understood enough, by the laws of the universe, whatever they are. (As rationalists, this is the single fundamental axiom we should take on faith.) Everything we observe is structured by these laws, resulting in the order of the universe. Thus a falsehood may be arbitrary, but a truth, and any observation, will never be arbitrary, because it must follow these laws. In particular, we observe that there are laws, and order, over all spatial and temporal scales. At the atomic/molecular scale, we have the laws of physics (quantum mechanics). At the organism level, we have certain laws of biology (mutation, natural selection, evolution, etc.). At the meta-cognitive level, there will be order, even if we don’t perceive or understand that order.
Obviously, morality is a natural emergent property of sapience. (Since we observe it.) Perhaps it is not necessary… concluding necessity would require a model of morality, which I don’t have. But imagine the space of all sapient beings over all time in the universe. Imagine the patterning of their respective moralities. Certainly, their moralities will differ from each other. (This I can conclude, because I observe differences even among human moralities.) However, it is no leap of faith, just an application of that most important assumption, to expect that their moralities will also have certain features in common: they will obey certain laws and will exhibit order, even if this is not readily demonstrated in any single realization. Our morality – whatever it is – is meant to be, is natural, and is without question obeying the laws of the universe.
By analogy with evolution (here I am departing from science and reverting to reductionism, trying to understand something in the context of what I do understand – the analogy doesn’t necessarily hold, and one must use one’s intuition to judge whether it is reasonable), there may not be a unique emergent “best” morality, but it may be the case that certain moralities are better than others, just as some species are more fit than others. So instead of reading the existence of different moralities within humanity as evidence that morality is “relative” and arbitrary or meaningless, I see the variation as evidence that morality is something that is evolving, competing, striving even among humans to fit an idealized meta-pattern of morality, whatever it may be. Like all idealized abstractions, this meta-morality would be physically unattainable. It could only be deduced by looking at the pattern of moralities (across sapient life forms would be most useful) and abstracting what is essential, what is meaningful.
It is a constant feature of life to want to live. Every species has an imperative to do its utmost to live; each species contributes itself to the experiment, participating in the demonstration of which aspects of life are most fit. Paradoxically, this imperative holds even if it means changing from what the species was. There is a trade-off between fighting to stay the same (and “win” for your realization of life) and changing (a possible win for the life of the next species). Morality might be the same: we fight for our idea of morality (with a greater drive, not less, than our drive for life), but we will forfeit our own morality, willingly, for a foreign morality that our own morality recognizes as better. Morality wants to achieve this ideal morality of which we see only the vaguest features. (In complex systems, “wanting” means inexorably moving towards, globally.)
I’m not always sure when one moral position is better than another – there seems to be plenty of gray at this local level of my understanding. However, some comparisons are quite clear. Affirming that morality exists is a more moral position than denying that it exists. Also, morality is not just doing what’s best for the community by facilitating cooperation: that explanation is needlessly reductionist. We can see this in the (abstract) willingness of moral people to sacrifice themselves – even in a total-loss situation – for a higher moral ideal. Morality is not transcendent, however; “transcendent” is an old word that has lost its usefulness. We can simply say that morality is an emergent property. An emergent property of something. A month ago, I would have said intelligence, but I’m not sure. A certain kind of intelligence, surely. Social intelligence, perhaps – one that even ants possess, but that a paperclip AI does not.
[Later edit: I’ve convinced myself that a paperclip AI does have a morality, though a really different one. Perhaps morality is an emergent property of having a goal. Could you convince a paperclip AI not to make any paperclips if the universe would have more “paperclipness” without them? Maybe it would decide that everything being paperclips results in an arbitrary number, and that it would be a stronger statement to eradicate all paperclips...]
No, reductionism doesn’t lead to the denial of morality. Reductionism only denies high-level entities the magical ability to directly influence reality independently of the underlying quarks. It insists only that morality be implemented in quarks, not that it doesn’t exist.
I agree that if morality exists, it is implemented through quarks. This is what I meant by morality not being transcendent. Used in this sense, as the assertion of a single magisterium for the physical universe (i.e., no magic), I think reductionism is another justified tenet of rationality—part of the consistent ideology.
However, what would you call the belief I was criticizing? The one that denies the existence of non-material things? (Of course the “existence” of non-material things is something different than the existence of material things, and it would be useful to have a qualified word for this kind of existence.)
Eliminative materialism?
Yes, that is quite close. And now that I have a better handle I can clarify: Eliminative materialism is not itself “false”—it is just an interesting purist perspective that happens to be impracticable. The fallacy is when it is inconsistently applied.
Moral skeptics aren’t objecting to the existence of morality because it is an abstract idea; they are objecting because the intersection of morality with our current logical/scientific understanding reduces to something trivial compared to what we mean when we talk about morality. I think their argument runs along these lines: if we can’t scientifically extend morality to include what we do mean (for example, by at least labeling in some rigorous way what it is we want to include), then we can’t rationally mean anything more.