Certainly, the main ideological tenet of Less Wrong is rationality. However, other tenets slip in, some more justified than others. One of the tenets that I think needs a much more critical view is something I call “reductionism” (perhaps closer to Daniel Dennett’s “greedy reductionism” than what you may have in mind). The denial of morality is perhaps one of the best examples of the fallacy of reductionism.

Human morality exists and is not arbitrary. Intuitively, the denial of morality is absurd, and ideologies that lead to intuitively absurd conclusions should require extraordinary justification before we keep believing in them. In other words, you must be a rationalist first and a reductionist second.
First, science is not reductionist. Science doesn’t claim that everything can be understood in terms of what we already understand. Science makes hypotheses, and if the hypotheses don’t explain everything, it looks for other hypotheses. So far, we don’t understand how morality works. It is a conclusion of reductionism – not science – that morality is meaningless or doesn’t exist. Science is silent on the nature of morality: science is busy observing, not making pronouncements by fiat (or faith).
We believe that everything in the world makes sense: that everything is explainable, if only we knew enough and understood enough, by the laws of the universe, whatever they are. (As rationalists, this is the single fundamental axiom we should take on faith.) Everything we observe is structured by these laws, resulting in the order of the universe. Thus a falsehood may be arbitrary, but a truth, and any observation, can never be arbitrary, because it must follow these laws. In particular, we observe that there are laws, and order, at all spatial and temporal scales. At the atomic/molecular scale, we have the laws of physics (quantum and classical mechanics). At the organism level, we have certain laws of biology (mutation, natural selection, evolution, etc.). At the meta-cognitive level, there will be order, even if we don’t perceive or understand that order.
Obviously, morality is a natural emergent property of sapience. (We observe it, after all.) Perhaps it is not necessary; concluding necessity would require a model of morality, which I don’t have. But imagine the space of all sapient beings over all time in the universe, and imagine the patterning of their respective moralities. Certainly, their moralities will differ from one another. (This I can conclude, because I observe differences even among human moralities.) However, it is no leap of faith, but merely an application of that most important assumption, to say that their moralities will also have certain features in common: they will obey certain laws and will exhibit order, even if this is not readily demonstrated in any single realization. Our morality – whatever it is – is meant to be, is natural, and without question obeys the laws of the universe.
By analogy with evolution (here I am departing from science and reverting to reductionism, trying to understand something in the context of what I do understand—the analogy doesn’t necessarily hold, and one must use one’s intuition to estimate whether it is reasonable), there may not be a unique emergent “best” morality, but it may be the case that certain moralities are better than others, just as some species are more fit than others. So instead of taking the existence of different moralities within humanity as evidence that morality is “relative”, arbitrary, or meaningless, I see the variation as evidence that morality is something evolving, competing, striving even among humans to fit an idealized meta-pattern of morality, whatever it may be. Like all idealized abstractions, the meta-morality would be physically unattainable. It could only be deduced by looking at the pattern of moralities (across sapient life forms would be most useful) and abstracting what is essential, what is meaningful.
It is a constant feature of life to want to live. Every species has an imperative to do its utmost to live; each contributes itself to the experiment, participating in the demonstration of which aspects of life are most fit. Paradoxically, that imperative holds even if it means changing from what the species was. There is a trade-off between fighting to stay the same (and “win” for your realization of life) and changing (a possible win for the life of the next species). Morality might be the same: we fight for our idea of morality (with a greater drive, not less, than our drive for life), but we will forfeit our own morality, willingly, for a foreign morality that our own morality recognizes as better. Morality wants to achieve this ideal morality of which we see only the vaguest features. (In complex systems, “wanting” means inexorably moving towards, globally.)
I’m not always sure when one moral position is better than another – there seems to be plenty of gray at this local level of my understanding. However, some comparisons are quite clear. Holding that morality exists is a more moral position than denying that it exists. Also, morality is not just doing what’s best for the community by facilitating cooperation: that explanation is needlessly reductionist. We can see this in the (abstract) willingness of moral people to sacrifice themselves – even in a total-loss situation – for a higher moral ideal. Morality is not transcendent, however; “transcendent” is an old word that has lost its usefulness. We can just say that morality is an emergent property. An emergent property of something. A month ago, I would have said intelligence, but now I’m not sure. A certain kind of intelligence, surely. Social intelligence, perhaps: the kind that even ants possess, but a paperclip AI does not.
[Later edit: I’ve convinced myself that a paperclip AI does have a morality, though a really different one. Perhaps morality is an emergent property of having a goal. Could you convince a paperclip AI not to make any paperclips if the universe would have more “paperclipness” without them? Maybe it would decide that everything being paperclips results in an arbitrary number of paperclips, and that it would be a stronger statement to eradicate all paperclips...]
No, reductionism doesn’t lead to denial of morality. Reductionism only denies high-level entities the magical ability to influence reality directly, independently of the underlying quarks. It only insists that morality be implemented in quarks, not that it doesn’t exist.
I agree that if morality exists, it is implemented through quarks. This is what I meant by morality not being transcendent. Used in this sense, as the assertion of a single magisterium for the physical universe (i.e., no magic), I think reductionism is another justified tenet of rationality—part of the consistent ideology.
However, what would you call the belief I was criticizing? The one that denies the existence of non-material things? (Of course, the “existence” of non-material things is something different from the existence of material things, and it would be useful to have a qualified word for this kind of existence.)
Eliminative materialism?
Yes, that is quite close. And now that I have a better handle on it, I can clarify: eliminative materialism is not itself “false”—it is just an interesting purist perspective that happens to be impracticable. The fallacy arises when it is inconsistently applied.
Moral skeptics aren’t objecting to the existence of morality because it is an abstract idea; they are objecting because the intersection of morality with our current logical/scientific understanding reduces it to something trivial compared to what we mean when we talk about morality. I think their argument runs along these lines: if we can’t scientifically extend morality to include what we do mean (or at least label, in some rigorous way, what it is we want to include), then we can’t rationally mean anything more.