People only want to hear what they like to hear, and I almost can’t help but provoke and startle, given the opportunity.
I’ve known people like that. From what I’ve seen, for many people it’s a game to make social interaction more interesting. I play it very poorly (for example, I almost never get sarcasm unless it’s pointed out to me, and even if I do, I’m usually too lazy to come up with something sarcastic to say in return, so I just ignore it, which is awfully boring for the person being sarcastic.) Is this why you do this?
I’m sure you can spin things in such a way that you’ll be able to convert people into rationalists much more easily and effectively than any confrontational hardliner could hope to.
I am very good at engaging in dialogue with just about anybody and presenting my points in such a way that it’s natural for them to agree. I think the most important component is making it obvious to people that “I don’t dislike you because you disagree with me; if anything, I like the fact that we disagree, because maybe I can learn something new from you.” Even confrontational people usually respond well to that kind of attitude, and it’s a win-win situation because I get to engage in the discussion that I want.
“representatives of the church attend a meeting with politics and industry to discuss the future of nuclear energy”—as if a pastor or bishop actually knows anything about anything. (Let alone something about morality and ethics).
I disagree. After spending some very formative years of my adolescence singing in the church choir, I’ve found that the ministers do seem to...well, maybe “know more about morality” isn’t the right phrase, but they’ve thought about it more. A large percentage of the population never thinks about morality. Some because they just live their lives without really questioning anything (like at least 50% of my fellow pool staff), some because they base their values on selfishness and don’t want to have to change them. Church morality has its flaws, for example the implicit biblical attitudes towards sex before marriage, women, homosexuality, etc. But in the Anglican church anyway, and even in the more conservative Pentecostal church, I know hardly any actual Christians who believe that someone is inherently bad for being homosexual. There are a lot of “good” memes in the Christian morality complex: ideas of being radically generous and loving your enemies. I have seen some incredible acts of generosity in the Pentecostal church especially.
There’s also a sample bias in the kind of people who become ministers, especially Anglican ministers (this branch of Christianity is already extremely liberal; they do gay marriages and everything.) They tend to be fairly intellectual, i.e. introspective and likely to meditate on moral principles, and they tend to already like people and want to help them. And they spend years studying the material. Thus, compared to Joe Smith who works at the movie theater, I think most pastors do know more about morality and ethics. Of course, someone who’s put in the same amount of time thinking about it but isn’t limited to agreeing with a book written two thousand years ago is still more likely to be right, but I don’t know that many people like that firsthand.
Partially perhaps, but it’s hardly the main reason. Language nearly always carries with it a frequency that conveys social status and a lot of talk and argument isn’t much more than a renegotiation or affirmation of the social contract between people. So quite a lot of the actual content of any given typical conversation you’re likely to hear is quite braindead and only superficially appears to be civilized. That kind of smalltalk is boring if it’s transparent to you, and controversy spices things up for sure—so yes, there may be something to it...
But I think the ultimate reason for being provocative is that “the truth” simply is quite provoking and startling by itself, given the typical nonrational worldviews people hold. If people were rational by nature and roughly on the same page as most lesswrongers, I certainly wouldn’t feel like making an effort to provoke or piss people off just for the sake of disagreement. I simply care a lot about the truth and I care comparatively less about what people think (in general and also about me), so I’m often not terribly concerned about sounding agreeable. Sometimes I make an effort if I find it important to actually convince someone, but naturally I just feel like censoring my opinions no more than necessary. (Which is not to say that my approach is in any way all that commendable; it just simply feels natural to me—it’s in a way my mental pathway of least resistance and conscious effort.)
I’m not doing it all the time of course, I can be quite agreeable when I happen to feel like it—but overall it’s just not my regular state of being.
″...as if a pastor or bishop actually knows anything about anything. (Let alone something about morality and ethics).”
I disagree.
You can’t be serious, how dare you trample on my beliefs and hurt my feelings like that? ;)
...well, maybe know more about morality isn’t the right phrase, but they’ve [theologians] thought about it more.
Sure, and conspiracy theorists think a lot about 9/11 as well. The amount of thought people spend on any conceivable subject is at best very dimly (and usually not at all) correlated with the quality/truthfulness of their conclusions, if the “mental algorithm” by which they structure their thoughts is semi-worthless by virtue of being irrational (i.e., out of step with reality).
Trying to think about morality without the concept that morality must exclusively relate to the neurological makeup of conscious brains is damn close to a waste of time. It’s like trying to wrap your head around biology without the concept of evolution—it cannot be done. You may learn certain things nonetheless, but whatever model you come up with—it will be a completely confused mess. Whatever theology may come up with on the subject of morality is at best right by accident, and frequently enough it’s positively primitive, wrong and harmful—either way it’s a complete waste of time and thought given the rational alternatives (neurology, psychology) we can employ to discover true concepts about morality.
What religion has to say about morality is in the same category as what science and philosophy had to say about life and biology before Darwin and Wallace came along—which in retrospect amounts pretty much to “next to nothing of interest”.
So are all those Anglican priests nice and moral people? Sure, whatever. But do they have any real competence whatsoever to make decisions about moral issues (let alone things like nuclear power)? Hell no.
Trying to think about morality without the concept that morality must exclusively relate to the neurological makeup of conscious brains is damn close to a waste of time.
That’s like saying that the job of a sports coach is a waste of time because he is clueless about physics. If it were impossible to gain useful insights and intuitions about the world without reducing everything to first principles, nothing would ever get done. On the contrary, in the overwhelming majority of cases where humans successfully grapple with the real world, from the most basic everyday actions to the most complex technological achievements, it’s done using models and intuitions that are, as the saying goes, wrong but useful.
So, if you’re looking for concrete answers to the basic questions of how to live, it’s a bad idea to discard wisdom from the past just because it’s based on models of the world that we have since replaced with fundamentally more accurate ones. A model that captures fundamental reality more closely doesn’t automatically translate to superior practical insight. Otherwise people who want to learn to play tennis would be hiring physicists to teach them.
Friendly-HI didn’t want to suggest that you actually have to perform the reduction to be any good. Just that you keep in mind that there’s nothing fundamentally irreducible there. I was about to add more details but Friendly-HI already did.
Trying to think about morality without the concept that morality must exclusively relate to the neurological makeup of conscious brains is damn close to a waste of time.
This seems mistaken, especially considering that we’re just getting started on the neurology.
I’d say that trying to think about morality without careful observation of what changes people can make and how is a waste of time.
I already suspected I should have made my position clearer to prevent confusion. Lesson learned: I really should have made the effort.
The key word in my sentence is “concept”. I didn’t say the only source of learning things about morality is scanning the brain and understanding neurology. What I meant to convey is the vitally important >concept< that morality relates to something tangible in the real world (brains), instead of something mystical or metaphysical, or some “law of nature” that is somehow separate from biological reality. If people aren’t aware that morality is a concept that solely applies to cognitive brains, their ideas simply will not be congruent.
Psychology studies people’s behavior at a different “resolution” than neurology, but I’m certainly not saying that observation of human behavior is negligible when it comes to morality—quite the opposite. I meant to say that our model of morality must be based on the true premise that morality applies to brains and neurology—not that neurology is the only valid tool in the toolbox for rationally figuring out what is moral and what is not. I hope you catch my drift.
What I meant to convey is the vitally important >concept< that morality relates to something tangible in the real world (brains), instead of something mystical or metaphysical, or some “law of nature” that is somehow separate from biological reality. If people aren’t aware that morality is a concept that solely applies to cognitive brains, their ideas simply will not be congruent.
This is incorrect in at least two ways.
First, models can be useful in practice even if they don’t incorporate reductionism even in principle. In fact, many useful models make explicit non-reductionist assumptions (as well as other assumptions that are known to be false from more exact and fundamental physical theories). Again, this is true for everything from the most mundane manual work to the most sophisticated technical work. Similarly, ideas about morality given by models that use various metaphysical fictions may well give you better answers on how to live in practice than any alternative model. You may disagree that this happens in practice, but you can’t demonstrate this just by dismissing them based on the fact that they make use of metaphysical fictions.
Second, it’s not at all clear whether a workable moral system for interactions between people is possible that doesn’t use metaphysical fictions. (By “workable moral system” I mean a model capable of giving practical answers to the questions, both public and private, on what to do and how to live.) You can dress these fictions in modern fashionable language so as to make them more difficult to pinpoint, but this only makes the arguments more confused and their fallacies more seductive. Personally, I’ll take honest and upfront talk about God’s commands and natural law any day over underhanded smuggling of metaphysical fictions by invoking, say, human rights or interpersonally comparable utilities. (And in fact, I have yet to see any sound argument that the latter, nowadays more fashionable sorts of models produce better answers in practice than those of the former, old-fashioned sort.)
First, models can be useful in practice even if they don’t incorporate reductionism even in principle.
True, but are such models really ->more<- useful—especially in the long run? If I’m a philosopher of morality and am not aware that morality only applies to certain kinds of minds, which arise from certain kinds of brains… then my work would be akin to building a sky castle and obsessing over the color of the wallpaper, while being oblivious that the whole thing isn’t firmly grounded in reality but floats in midair. Of course that doesn’t mean that all of my concepts would be wrong, since perfectly normal common sense can carry someone a long way when it comes to moral behavior… but I may still be very liable to get other kinds of important questions dead wrong—like stem cells or abortion.
So while of course you’re right when you say that models can be very useful even if they are non-reductionist, I would maintain that there is a limit to the usefulness such simplistic models can reach, and that they can be surpassed by models that are better grounded in reality. In 50 years we may have to answer questions like: “is a simulated mind a real person to whom we must apply our morality?” or “how should we treat this new genetically engineered species of animal?” I would predict that answering such questions could be simple, although not easily achieved by today’s standards: look at their minds and see how they process pain and pleasure and how these emotions relate to various other things going on in there, and you’ll have your practical answer, without the need for pointless armchair-philosophy battles based on false premises. We may encounter many moral issues of similar sorts in the upcoming years, and we’ll be terribly unequipped to deal with them if we don’t realize that they are reducible to tangible neural networks.
PS: Also I’m not sure how human rights are any more a metaphysical fiction than, say… tax law is. How is a social contract or convention metaphysical, if its content can be found inside people’s brains or written down on artifacts? But I highly suspect that’s not the kind of human rights you’re talking about—nor the kind most people are talking about when they use this term. So you probably rightly accuse them of treating human rights as if they were some kind of metaphysical concept.
Also I find it curious that you would prefer god-talk morality over certain philosophical concepts of morality—seeing how the latter would in principle be much more susceptible to our line of reasoning than the former. I prefer as little god-talk as possible.
True, but are such models really ->more<- useful—especially in the long run?
Of course they are more useful. You have only finite computational power, and often any models that are tractable must be simplified at the expense of capturing fundamental reality. Even if that’s not an issue, insisting on a more exact model beyond what’s good enough in practice only introduces additional cost and error-proneness.
Now, you are of course right that problems that may await us in the future, such as e.g. the moral status of artificial minds, are hopelessly beyond the scope of any traditional moral/ethical intuitions and models, and require getting down to the fundamentals if we are to get any sensible answers at all. However, in this discussion, I have in mind much more mundane everyday practical questions of how to live your life and deal with people. When it comes to these, traditional models and intuitions that have evolved naturally (in both the biological and cultural sense) normally beat any attempts at second-guessing them. That’s at least from my experience and observations.
Also I’m not sure how human rights are any more a metaphysical fiction than say… tax law is.
Fundamentally, they aren’t. The normal human modus operandi for resolving disputes is to postulate some metaphysical entities about whose nature everyone largely agrees, and use the recognized characteristics of these metaphysical entities as Schelling points for agreement. This gives a great practical flexibility to norms, since a disagreement about them can be (hopefully) channeled into a metaphysical debate about these entities, and the outcome of this debate is then used as the conclusive Schelling point, avoiding violent conflict.
From this perspective, there is no essential difference between ancient religious debates over what God’s will is in some dispute and the modern debates over what is compatible with “human rights”—or any legal procedure beyond fact-finding, for that matter. All of these can be seen as rhetorical contests in metaphysical debates aimed at establishing and stabilizing more concrete Schelling points within some existing general metaphysical framework. (As for utilitarianism, here we get to another important criticism of it: conclusions of utilitarian arguments typically make for very poor Schelling points in practice, for all sorts of reasons.)
Of course, these systems can work better or worse in practice, and they can break down in all sorts of nasty ways. The important point is that human disputes will be resolved either violently or by such metaphysical debates, and the existing frameworks for these debates should be judged on the practical quality of the network of Schelling points they provide—not on how convincingly they obfuscate the unavoidable metaphysical nature of the entities they postulate. From this perspective, you might well prefer God-talk in some situations for purely practical reasons.
given the rational alternatives (neurology,psychology) we can employ to discover true concepts about morality.
I’m with you most of the way. On the rational alternatives though, I’m not sure what you suggest works in the way we might imagine.
Neurology and psychology can provide a factual/ontological description of how humans manifest morality. They don’t give a description of what morality should be.
There’s a deontological kernel to morality, it’s about what we think people should do, not what they do do.
Psychology etc. can give great insights into choosing morals that go with the human grain. But those choices are primarily motivated by pragmatism rather than virtue. The virtue you’ve chosen is to be pragmatic…
Happy to be proven wrong here, but in terms of what virtues we place value on, I think there’s going to be an element of arbitrariness in their choice.
The question “what do we think people should do?” is a question about what we think. Thus the relevance of psychology. Note that this is different from “what should people do?” being itself about what we think. But if you want to find out “what should people do?” half the work is pretty much done for you if you can figure out where this “should” idea in your brain is coming from, and what it means.
I simply care a lot about the truth and I care comparatively less about what people think (in general and also about me), so I’m often not terribly concerned about sounding agreeable.
Can you clarify this statement? As phrased, it doesn’t quite mesh with the rest of your self-description. If you truly did not care about what other people thought, it wouldn’t bother you that they think untrue things. A more precise formulation would be that you assign little or no value to untrue beliefs. Furthermore, you assign very little value to any emotions that for the person are bound up in their holding that belief.
The untrue belief and the attached emotions are not the same thing, though they are obviously related. It does not follow from “untrue beliefs deserve little respect” that “emotions attached to untrue beliefs deserve little respect”. The emotions are real after all.
If you truly did not care about what other people thought
vs.
I care comparatively less about what people think
You’re right about the emotions part, but I’m certainly not bashing people as hard as Dr. House, and I’m also not gonna take nice delusions of heaven away from poor old granny. Yes, of course I too care about the emotions of people, depending on the person and the specific circumstances.
I’m also usually not the one to open up the conversation on the kind of topics we discuss here, but if people share their opinion I’ll often throw my weight in and voice my unusual opinions without too much concern about tiptoeing around sensibilities of -say- the political, religious or the new age types.
Of course I’m not claiming to be a total hardliner; deep within my brain there is such a thing as a calculation taking place about whether or not giving my real opinion to person X, Y, or Z will result in too much damage to me, others, or our relationship… it’s just that I’m less inclined to be agreeable in comparison with others. I’m not claiming to be brain-damaged, after all; of course I care as well to some (considerably less than average) extent about social repercussions.
Addendum: Agreeableness is also something that is known to rise with progressing age, so it’s likely that I will become more agreeable over time, seeing how I’m still just 23. Another factor in agreeableness is impulsiveness, which thankfully diminishes with age—and I’m a fairly impulsive person. Agreeableness isn’t just composed of “one thing”, it’s the result of several interactions.
Addendum: Agreeableness is also something that is known to rise with progressing age, so it’s likely that I will become more agreeable over time, seeing how I’m still just 23. Another factor in agreeableness is impulsiveness, which also diminishes with age—and I’m a fairly impulsive person.
I’m 19, and I’m already one of the most agreeable and least impulsive people I know. I’m fucked...
No way! There’s a possibility I wouldn’t be able to keep everyone happy all the time! There’s a possibility people would dislike me for policies I implemented! It would be WAY too stressful!
^ what he said
You can quote a paragraph by preceding it with > (or multiple angle brackets to nest quotes deeper).
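For instance, quoting looks like this (each additional angle bracket nests the quote one level deeper):

```
> This paragraph is quoted.
>> This one is a quote nested inside the first.
```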
thx. Old habits die hard.
This seems mistaken, especially considering that we’re just getting started on the neurology.
I’d say that trying to think about morality without careful observation of what changes people can make and how is a waste of time.
I already thought I should have made my position more clear to prevent confusion. Lesson learned: I really should have taken the effort.
The key word in my sentence is “concept”. I didn’t say the only source of learning things about morality is scanning the brain and understanding neurology. What I meant to convey is the vitally important >concept< that morality relates to something tangible in the real world (brains), instead of something mystical or metaphysical, or some “law of nature” that is somehow separate from biological reality. If people aren’t aware that morality is a concept that solely applies to cognitive brains, their ideas simply will not be congruent.
Psychology is studying people’s behavior at a different “resolution” than neurology, but I’m certainly not saying that observation of human behavior is negligible when it comes to morality—quite the opposite. I meant to say, that our model of morality must be based on the true premise that morality applies to brains and neurology—not that neurology is the only valid tool in the toolbox to rationally figure out what is moral and what is not. I hope you catch my drift.
This is incorrect in at least two ways.
First, models can be useful in practice even if they don’t incorporate reductionism even in principle. In fact, many useful models make explicit non-reductionist assumptions (as well as other assumptions that are known to be false from more exact and fundamental physical theories). Again, this is true for everything from the most mundane manual work to the most sophisticated technical work. Similarly, ideas about morality given by models that use various metaphysical fictions may well give you better answers on how to live in practice than any alternative model. You may disagree that this happens in practice, but you can’t demonstrate this just by dismissing them based on the fact that they make use of metaphysical fictions.
Second, it’s not at all clear whether a workable moral system for interactions between people is possible that doesn’t use metaphysical fictions. (By “workable moral system” I mean a model capable of giving practical answers to the questions, both public and private, on what to do and how to live.) You can dress these fictions in modern fashionable language so as to make them more difficult to pinpoint, but this only makes the arguments more confused and their fallacies more seductive. Personally, I’ll take honest and upfront talk about God’s commands and natural law any day over underhanded smuggling of metaphysical fictions by invoking, say, human rights or interpersonally comparable utilities. (And in fact, I have yet to see any sound argument that the latter, nowadays more fashionable sorts of models produce better answers in practice than those of the former, old-fashioned sort.)
True, but are such models really *more* useful—especially in the long run? If I’m a philosopher of morality and am not aware that morality only applies to certain kinds of minds, which arise from certain kinds of brains… then my work would be akin to building a castle in the sky and obsessing over the color of the wallpaper, oblivious that the whole thing isn’t firmly grounded in reality but floats in midair. Of course that doesn’t mean all of my concepts would be wrong, since perfectly normal common sense can carry someone a long way when it comes to moral behavior… but I may still be very susceptible to getting other kinds of important questions dead wrong—like stem cell research or abortion.
So while of course you’re right when you say that models can be very useful even if they are non-reductionist, I would maintain that there is a limit to the usefulness such simplistic models can reach, and that they can be surpassed by models that are better grounded in reality. In 50 years we may have to answer questions like: “is a simulated mind a real person to whom we must apply our morality?” or “how should we treat this new genetically engineered species of animal?” I would predict that answering such questions could be simple, although not easily achieved by today’s standards: look at their minds, see how they process pain and pleasure and how these emotions relate to various other things going on in there, and you’ll have your practical answer, without the need for pointless armchair-philosophy battles based on false premises. We may encounter many moral issues of similar sorts in the upcoming years, and we’ll be terribly ill-equipped to deal with them if we don’t realize that they are reducible to tangible neural networks.
PS: Also, I’m not sure how human rights are any more a metaphysical fiction than, say, tax law is. How is a social contract or convention metaphysical, when its content can be found inside people’s brains or written down in artifacts? But I highly suspect that’s not the kind of human rights you’re talking about—nor the kind most people are talking about when they use this term. So you probably rightly accuse them of treating human rights as if they were some kind of metaphysical concept.
I also find it curious that you would prefer god-talk morality over certain philosophical concepts of morality—seeing how the latter would in principle be much more susceptible to our line of reasoning than the former. I prefer as little god-talk as possible.
Of course they are more useful. You have only finite computational power, and often any models that are tractable must be simplified at the expense of capturing fundamental reality. Even if that’s not an issue, insisting on a more exact model beyond what’s good enough in practice only introduces additional cost and error-proneness.
Now, you are of course right that problems that may await us in the future, such as e.g. the moral status of artificial minds, are hopelessly beyond the scope of any traditional moral/ethical intuitions and models, and require getting down to the fundamentals if we are to get any sensible answers at all. However, in this discussion, I have in mind much more mundane everyday practical questions of how to live your life and deal with people. When it comes to these, traditional models and intuitions that have evolved naturally (in both the biological and cultural sense) normally beat any attempts at second-guessing them. That’s at least from my experience and observations.
Fundamentally, they aren’t. The normal human modus operandi for resolving disputes is to postulate some metaphysical entities about whose nature everyone largely agrees, and use the recognized characteristics of these metaphysical entities as Schelling points for agreement. This gives a great practical flexibility to norms, since a disagreement about them can be (hopefully) channeled into a metaphysical debate about these entities, and the outcome of this debate is then used as the conclusive Schelling point, avoiding violent conflict.
From this perspective, there is no essential difference between ancient religious debates over what God’s will is in some dispute and the modern debates over what is compatible with “human rights”—or any legal procedure beyond fact-finding, for that matter. All of these can be seen as rhetorical contests in metaphysical debates aimed at establishing and stabilizing more concrete Schelling points within some existing general metaphysical framework. (As for utilitarianism, here we get to another important criticism of it: conclusions of utilitarian arguments typically make for very poor Schelling points in practice, for all sorts of reasons.)
Of course, these systems can work better or worse in practice, and they can break down in all sorts of nasty ways. The important point is that human disputes will be resolved either violently or by such metaphysical debates, and the existing frameworks for these debates should be judged on the practical quality of the network of Schelling points they provide—not on how convincingly they obfuscate the unavoidable metaphysical nature of the entities they postulate. From this perspective, you might well prefer God-talk in some situations for purely practical reasons.
I’m with you most of the way. On the rational alternatives, though, I’m not sure that what you suggest works the way we might imagine.
Neurology and psychology can provide a factual/ontological description of how humans manifest morality. They don’t give a description of what morality should be.
There’s a deontological kernel to morality: it’s about what we think people should do, not what they actually do.
Psychology etc. can give great insights into choosing morals that go with the human grain. But those choices are primarily motivated by pragmatism rather than virtue. The virtue you’ve chosen is to be pragmatic…
Happy to be proven wrong here, but in terms of what virtues we place value on, I think there’s going to be an element of arbitrariness in their choice.
The question “what do we think people should do?” is a question about what we think. Thus the relevance of psychology. Note that this is different from “what should people do?” being itself about what we think. But if you want to find out “what should people do?” half the work is pretty much done for you if you can figure out where this “should” idea in your brain is coming from, and what it means.
Can you clarify this statement? As phrased, it doesn’t quite mesh with the rest of your self-description. If you truly did not care about what other people thought, it wouldn’t bother you that they think untrue things. A more precise formulation would be that you assign little or no value to untrue beliefs. Furthermore, you assign very little value to any emotions the person has bound up in holding that belief.
The untrue belief and the attached emotions are not the same thing, though they are obviously related. It does not follow from “untrue beliefs deserve little respect” that “emotions attached to untrue beliefs deserve little respect”. The emotions are real after all.
vs.
You’re right about the emotions part, but I’m certainly not bashing people as hard as Dr. House and I’m also not gonna take nice delusions of heaven away from poor old granny. Yes, of cause I too care about the emotions of people, depending on the person and the specific circumstances.
I’m also usually not the one to open up conversations on the kinds of topics we discuss here, but if people share their opinion I’ll often throw my weight in and voice my unusual opinions without too much concern for tiptoeing around the sensibilities of, say, the political, religious, or new-age types.
Of cause I’m not claiming to be a total hardliner; deep within my brain there is such a thing as a calculation taking place about whether or not giving my real opinion to person X, Y, or Z will result in too much damage for me, others, or our relationship… it’s just that I’m less inclined to be agreeable compared with others. I’m not claiming to be brain-damaged, after all; of cause I care as well to some (considerably less than average) extent about social repercussions.
Addendum: Agreeableness is also known to rise with age, so it’s likely that I will become more agreeable over time, seeing how I’m still just 23. Another factor in agreeableness is impulsiveness, which thankfully diminishes with age—and I’m a fairly impulsive person. Agreeableness isn’t just composed of “one thing”; it’s the result of several interacting factors.
I’m 19, and I’m already one of the most agreeable and least impulsive people I know. I’m fucked...
Maybe you should consider a career in politics where having a spine is optional :P
EDIT: Wait, what am I saying… it’s of cause not optional but actually prohibitively costly.
No way! There’s a possibility I wouldn’t be able to keep everyone happy all the time! There’s a possibility people would dislike me for policies I implemented! It would be WAY too stressful!
Second time I catch this, so it may not be a mere typo. Did you mean “of c_our_se”, in the sense of “obviously”?
English is my third language, so unfortunately it wasn’t really just a typo. Now that you’ve pointed it out, of course, the mistake is obvious to me.