It is extremely interesting to see the community’s attempts to justify values through, or extract values from, rationality. I have been pointing to an alternative perspective, based on the work of Jordan Peterson, in which morality is grounded on evolved behavioral patterns. It is rationally coherent and strongly supported by evidence. The only ‘downside’, if you can call it that, is that morality turns out not to be based on rationality, and that the “ought from an is” problem is an accurate portrayal of our current (and perhaps general) situation.
I am not going to expand on this unless you are interested, but I have a question. What does the rationalist community in general, and your article in particular, try to get at? I can think of two possibilities:
[1] that morality is based on rational thought as expressed through language
[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition
I do not see how [1] can be true, since we can observe the emergence of moral values in cultures in which rationality is hardly developed. Furthermore, even today, as your article shows, we are struggling to extract values from rational argument, so our intuition cannot stem from something we have not even succeeded at. As for [2], it is a very interesting proposal, but I have not seen any scientific evidence that links it to structures in the human brain.
I feel the rationalist community is resistant to entertaining the alternative because, if true, it would show that rationality is not the foundation of everything but a tool for assessing and manipulating. Maybe further resistance is caused because (in a slightly embarrassing turn of events) it brings stories, myth, and religion into the picture again, albeit in a very different manner. But even if that proves to be the case, so what? What is our highest priority here? Rationality or Truth?
I think many of us “rationalists” here would agree that rationality is a tool for assessing and manipulating reality. I would say much the same about morality. There’s not really a dichotomy between morality being “grounded on evolved behavioral patterns” and having “a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition”. Rather, the moral intuitions we have are computed in our brains, and the form of that computation is determined both by the selection pressures of evolution and the ways that our evolved brain structures interact with our various environments.
So what is our highest priority here? It’s neither Rationality nor Truth, but Morality in the broad sense—the somewhat arbitrary and largely incoherent set of states of reality that our moral intuition prefers. I say arbitrary because our moral intuition does not aim entirely at the optimization target of the evolutionary process that generated it—propagating our genes. Call that moral relativism if you want to.
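The claim that our moral intuition does not aim entirely at the optimization target of the process that generated it can be made concrete with a toy model. Everything below is my own illustrative construction (the trait, the fitness functions, and the numbers are all invented for the sketch): a crude hill-climbing stand-in for selection tunes a trait against an ancestral proxy, and the tuned trait then scores badly against the actual target once the environment changes.

```python
# Toy sketch, not from the thread: selection tunes a heuristic against an
# ancestral proxy; the tuned heuristic diverges from the real target when
# the environment shifts. All functions and constants here are invented.

def ancestral_fitness(sweetness_preference):
    # Ancestrally, a strong taste for sweetness (scarce ripe fruit)
    # tracked survival well; optimum placed arbitrarily at 0.9.
    return -(sweetness_preference - 0.9) ** 2

def modern_gene_propagation(sweetness_preference):
    # In a sugar-rich modern environment the same preference overshoots;
    # the (invented) optimum for the actual target is now 0.3.
    return -(sweetness_preference - 0.3) ** 2

def evolve(fitness, start=0.0, step=0.01, generations=200):
    """Crude hill-climbing stand-in for selection pressure."""
    trait = start
    for _ in range(generations):
        if fitness(trait + step) > fitness(trait):
            trait += step
        elif fitness(trait - step) > fitness(trait):
            trait -= step
    return trait

evolved = evolve(ancestral_fitness)
print(round(evolved, 2))  # prints 0.9, the ancestral optimum
# The evolved trait is now worse at the real target than the modern optimum:
print(modern_gene_propagation(evolved) < modern_gene_propagation(0.3))  # prints True
```

The point of the sketch is only that the intuition inherits the shape of the proxy it was tuned on, not the target itself, which is the sense in which it is “somewhat arbitrary.”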
There is a difference: computing a moral axiom is not the same as encoding it. With computation, the moral value would be an intrinsic property of some kind of mathematical structure. An encoding, on the other hand, is an implementation of an environmental adaptation as behavior, based on selection pressure. It does not contain an implicit rational justification, but it is objective in the sense that it is adapted to an external reality.
Moral value is not an “intrinsic property” of a mathematical structure—aliens couldn’t look at this mathematical structure and tell that it was morally important. And yet, whenever we compute something, there is a corresponding abstract structure. And when we reason about morality, we say that what is right wouldn’t change if you gave us brain surgery, so by morality we don’t mean “whatever we happen to think,” we mean that abstract structure.
Meanwhile, we are actual evolved mammals, and the reason we think what we do about morality is because of evolution, culture, and chance, in that order. I’m not sure what the point is of calling this objective or not, but it definitely has reasons for being how it is. But maybe you can see how this evolved morality can also be talked about as an abstract structure, and therefore both of these paragraphs can be true at the same time.
It seems like you were looking for things with “intrinsic properties” and “objective”-ness that we don’t much care about, and maybe this is why the things you were thinking of were incompatible, but the things we’re thinking of are compatible.
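The compatibility claim above can be illustrated with a small sketch of my own (the situations, the rule, and the table are all invented for the example): the same abstract structure, a mapping from situations to judgments, can be realized both by an explicit stated rule and by an opaque table that stands in for an evolved disposition with no internal justification.

```python
# Toy sketch, purely illustrative: one abstract mapping from situations to
# judgments, realized two different ways.

SITUATIONS = ["share food", "steal food", "help kin", "betray ally"]

def explicit_rule(situation):
    # A stated principle, in the style of hypothesis [1].
    return "wrong" if situation.split()[0] in {"steal", "betray"} else "right"

# An "evolved" lookup: no internal justification, just encoded dispositions,
# in the style of hypothesis [2].
evolved_intuition = {
    "share food": "right",
    "steal food": "wrong",
    "help kin": "right",
    "betray ally": "wrong",
}

# Both implementations realize the same abstract mapping, so talking about
# morality as an abstract structure and as an evolved mechanism need not
# conflict.
same = all(explicit_rule(s) == evolved_intuition[s] for s in SITUATIONS)
print(same)  # prints True
```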
I was commenting on dogiv’s specific points, but the discussion is about trying to discover whether morality 1) has an objective basis or is completely relative, and 2) has a rational/computational basis or not. Is it that you don’t care about approaching truth on this matter, or that you believe you already know the answer?
In any case, my main point is that Jordan Peterson’s perspective is (in my opinion) the most rational, cohesive, and well-evidenced one available, and I would love to see the community take the time to study it, understand it, and try to dispute it properly.
Nevertheless, I know not everyone has the time for that, so if you expand on your perspective on this ‘abstract structure’ and its basis, we can debate :)
[3] Some mixture. Morality doesn’t have to be one thing, or achieved in one way. In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and where people debate such things, they tend to use a broadly rationalist style, trying to find common principles and noting undesirable consequences.
Sure, this is a valid hypothesis. But my assessment and the individual points I offered above apply to this possibility as well, uncovering the same issues with it.
Novel situations can be seen through the lens of certain stories because those stories operate at such a level of abstraction that they are applicable to all human situations. The most universal and permanent levels of abstraction are considered archetypal. These would apply equally to a human living in a cave thousands of years ago and to a Wall Street lawyer. Of course, it is also true that the stories always need to be revisited to avoid their dissolution into dogma as the environment changes. Interestingly, it turns out that there are stories that recognize this need for ‘revisiting’ and deal with the strategies and pitfalls of the process.
That amounts to “I can make my theory work if I keep on adding epicycles”.
Your comment seems to me an indication that you don’t understand what I am talking about. It is a complex subject, and in order to formulate a coherent rational argument you will need to study it in some depth.
I am not familiar with Peterson specifically, but I recognise the underpinning in terms of Jung, monomyth theory, and so on.
Cool. Peterson is much clearer than Jung (on whom I don’t have a clear opinion). I am not claiming that everything Peterson says is correct or that I agree with all of it. I am pointing to his argument for the basis of morality in cultural transmission through imitation, rituals, myth, stories, etc., and the grounding of these structures in the evolutionary process, as the best rational explanation of morality I have come across. I have studied it in depth and I believe it to be correct. I am inviting engagement with the argument instead of biased rejection.
Without using terms such as “grounding” or “basis,” what are you saying and why should I care?
Good idea, let me try that.
I am pointing to his argument on our [communication] of moral values as cultural transmission through imitation, rituals, myth, stories etc. and the [indication of their correspondence with actual characteristics of reality] due to their development through the evolutionary process as the best rational explanation of morality I have come across.
And you should care because… you care about truth, and also because, if true, you can pay some attention to the wisdom traditions and their systems of knowledge.
The second set of brackets may be the disconnect. If “their” refers to moral values, that seems like a category error. If it refers to stories etc, that still seems like a tough sell. Nothing I see about Peterson or his work looks encouraging.
Rather than looking for value you can salvage from his work, or an ‘interpretation consistent with modern science,’ please imagine that you never liked his approach and ask why you should look at this viewpoint on morality in particular rather than any of the other viewpoints you could examine. Assume you don’t have time for all of them.
If that still doesn’t help you see where I’m coming from, consider that reality is constantly changing and “the evolutionary process” usually happened in environments which no longer exist.
Could you explain in a bit more detail please?
No, I do see where you are coming from, and I don’t blame you at all. But do see that you are not addressing the actual argument in its proper depth. My problem becomes one of convincing you to give your attention to it. Even then, it would be difficult to accept an approach that is based on a kind of lateral thinking that requires you to be exposed to multiple patterns before they connect. It is a big problem that I alluded to when I wrote my post Too Much Effort | Too Little Evidence. Peterson is trying to create a rational bridge towards the importance of narrative structures so that they are approached with seriousness.
This is addressed. The most archetypal stories are universal across all times and places. Others are modified according to time, place, and people. Even the process of, and need for, modification is encoded inside the stories themselves. These are extremely sophisticated systems.
Closer to [2]. Does the analogy in Section 2 make sense to you? That would be my starting point for trying to explain further.
As an analogy it does make sense, but it seems to me more like an attempt at a kind of mental sleight of hand. The fit of the key to the lock is better seen as a description of the pattern-matching mechanism that implements a value judgment. A judgment is more like having the choice between two (or more) keys that open the same lock but have different consequences. The question is on what basis we choose the key, not how the key works once chosen.
I really don’t see how the analogy gives any evidence for [2] but please tell me if I am missing something!