I had already thought I should make my position clearer to prevent confusion. Lesson learned: I really should have made the effort.
The key word in my sentence is “concept”. I didn’t say the only source of learning things about morality is scanning the brain and understanding neurology. What I meant to convey is the vitally important >concept< that morality relates to something tangible in the real world (brains), instead of something mystical or metaphysical, or some “law of nature” that is somehow separate from biological reality. If people aren’t aware that morality is a concept that solely applies to cognitive brains, their ideas simply will not be congruent.
Psychology studies people’s behavior at a different “resolution” than neurology, but I’m certainly not saying that observation of human behavior is negligible when it comes to morality, quite the opposite. I meant to say that our model of morality must be based on the true premise that morality applies to brains and neurology, not that neurology is the only valid tool in the toolbox for rationally figuring out what is moral and what is not. I hope you catch my drift.
What I meant to convey is the vitally important >concept< that morality relates to something tangible in the real world (brains), instead of something mystical or metaphysical, or some “law of nature” that is somehow separate from biological reality. If people aren’t aware that morality is a concept that solely applies to cognitive brains, their ideas simply will not be congruent.
This is incorrect in at least two ways.
First, models can be useful in practice even if they don’t incorporate reductionism even in principle. In fact, many useful models make explicit non-reductionist assumptions (as well as other assumptions that are known to be false from more exact and fundamental physical theories). Again, this is true for everything from the most mundane manual work to the most sophisticated technical work. Similarly, ideas about morality given by models that use various metaphysical fictions may well give you better answers on how to live in practice than any alternative model. You may disagree that this happens in practice, but you can’t demonstrate this just by dismissing them based on the fact that they make use of metaphysical fictions.
Second, it’s not at all clear whether a workable moral system for interactions between people is possible that doesn’t use metaphysical fictions. (By “workable moral system” I mean a model capable of giving practical answers to the questions, both public and private, on what to do and how to live.) You can dress these fictions in modern fashionable language so as to make them more difficult to pinpoint, but this only makes the arguments more confused and their fallacies more seductive. Personally, I’ll take honest and upfront talk about God’s commands and natural law any day over underhanded smuggling of metaphysical fictions by invoking, say, human rights or interpersonally comparable utilities. (And in fact, I have yet to see any sound argument that the latter, nowadays more fashionable sorts of models produce better answers in practice than those of the former, old-fashioned sort.)
First, models can be useful in practice even if they don’t incorporate reductionism even in principle.
True, but are such models really ->more<- useful—especially in the long run? If I’m a philosopher of morality and am not aware that morality only applies to certain kinds of minds, which arise from certain kinds of brains… then my work would be akin to building a castle in the sky and obsessing over the color of the wallpaper, oblivious that the whole thing isn’t firmly grounded in reality but floats in midair. Of course that doesn’t mean that all of my concepts would be wrong, since perfectly normal common sense can carry someone a long way when it comes to moral behavior… but I may still be very liable to get other kinds of important questions dead wrong, such as stem cells or abortion.
So while of course you’re right that models can be very useful even if they are non-reductionist, I would maintain that there is a limit to the usefulness such simplistic models can reach, and that they can be surpassed by models that are better grounded in reality. In 50 years we may have to answer questions like: “is a simulated mind a real person to whom we must apply our morality?” or “how should we treat this new genetically engineered species of animal?” I would predict that answering such questions could be conceptually simple, although not easily achieved with today’s technology: look at their minds, see how they process pain and pleasure and how these emotions relate to the various other things going on in there, and you’ll have your practical answer, without the need for pointless armchair-philosophy battles based on false premises. We may encounter many moral issues of similar sorts in the upcoming years, and we’ll be terribly unequipped to deal with them if we don’t realize that they are reducible to tangible neural networks.
PS: Also I’m not sure how human rights are any more a metaphysical fiction than say… tax law is. How is a social contract or convention metaphysical, if you’ll find its content inside the brains of people or written down on artifacts? But I highly suspect that’s not the kind of human rights you’re talking about, nor the kind most people are talking about when they use this term. So you probably rightly accuse them of treating human rights as if they were some kind of metaphysical concept.
Also I find it curious that you would prefer god-talk morality over certain philosophical concepts of morality—seeing how the latter would in principle be much more susceptible to our line of reasoning than the former. I prefer as little god-talk as possible.
True, but are such models really ->more<- useful—especially in the long run?
Of course they are more useful. You have only finite computational power, and a model must often be simplified, at the expense of capturing fundamental reality, just to be tractable. Even if that’s not an issue, insisting on a more exact model beyond what’s good enough in practice only introduces additional cost and error-proneness.
Now, you are of course right that problems that may await us in the future, such as the moral status of artificial minds, are hopelessly beyond the scope of any traditional moral/ethical intuitions and models, and require getting down to the fundamentals if we are to get any sensible answers at all. However, in this discussion I have in mind much more mundane, everyday practical questions of how to live your life and deal with people. When it comes to these, traditional models and intuitions that have evolved naturally (in both the biological and cultural sense) normally beat any attempts at second-guessing them. That’s at least my experience and observation.
Also I’m not sure how human rights are any more a metaphysical fiction than say… tax law is.
Fundamentally, they aren’t. The normal human modus operandi for resolving disputes is to postulate some metaphysical entities about whose nature everyone largely agrees, and use the recognized characteristics of these metaphysical entities as Schelling points for agreement. This gives a great practical flexibility to norms, since a disagreement about them can be (hopefully) channeled into a metaphysical debate about these entities, and the outcome of this debate is then used as the conclusive Schelling point, avoiding violent conflict.
From this perspective, there is no essential difference between ancient religious debates over what God’s will is in some dispute and the modern debates over what is compatible with “human rights”—or any legal procedure beyond fact-finding, for that matter. All of these can be seen as rhetorical contests in metaphysical debates aimed at establishing and stabilizing more concrete Schelling points within some existing general metaphysical framework. (As for utilitarianism, here we get to another important criticism of it: conclusions of utilitarian arguments typically make for very poor Schelling points in practice, for all sorts of reasons.)
Of course, these systems can work better or worse in practice, and they can break down in all sorts of nasty ways. The important point is that human disputes will be resolved either violently or by such metaphysical debates, and the existing frameworks for these debates should be judged on the practical quality of the network of Schelling points they provide—not on how convincingly they obfuscate the unavoidable metaphysical nature of the entities they postulate. From this perspective, you might well prefer God-talk in some situations for purely practical reasons.
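The coordinating role that Schelling points play in this account can be illustrated with a toy simulation (a sketch of the standard game-theoretic idea, not anything claimed in the discussion above; the “verdicts” are hypothetical examples): two parties benefit only if they settle on the same verdict, and a shared focal point, such as a commonly recognized command or right, turns blind guessing into reliable agreement.

```python
import random

def coordination_rate(options, focal=None, trials=10_000, seed=0):
    """Estimate how often two parties independently pick the same verdict.

    Without a focal point, each party chooses uniformly at random;
    with a shared focal point, both simply pick it.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if focal is not None:
            a = b = focal            # shared salient norm: trivial agreement
        else:
            a = rng.choice(options)  # no shared reference point: blind guessing
            b = rng.choice(options)
        hits += (a == b)
    return hits / trials

# Hypothetical verdicts two disputants might settle on.
verdicts = ["split equally", "first claimant wins",
            "stronger party wins", "elder decides"]

print(coordination_rate(verdicts))                         # close to 1/4
print(coordination_rate(verdicts, focal="split equally"))  # always 1.0
```

With four equally plausible verdicts and no shared framework, the parties agree only about a quarter of the time; any framework that makes one verdict mutually salient, whatever its metaphysical pedigree, gets them to agreement every time, which is the practical quality being argued for above.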