Moral intuitions (i.e., ‘kneejerk reactions’) are what fuel many people’s opinions. Can we do better on LW? Ethical systems (consequentialism, deontology) are often used as post-hoc rationalizations for those intuitions, but can we do better?
For these kinds of problems I especially like Kant’s approach: can we come up with a rule that underlies our opinion on something, and would we be willing to follow that rule even if it goes against our immediate intuitions in some other case? And the more specific a rule gets (e.g., ‘this only applies to green people’), the clearer the sign that we’re doing some special pleading.
How is coming up with a rule based on our moral intuitions and then following that rule even when it means violating our intuitions any better than just following intuitions in the first place? How is it better to replace following intuitions with following an imperfect simplification derived from an intuition?
I have been thinking these past months that I could somehow be immune from or outside of the necessity of having my intuitions dictate my values. Someone pointed out to me that it was essentially an intuition of mine that separating from this source of morality would be a good idea, and since then I have been trying to figure out how to live with being just an evolutionarily determined set of arbitrary (to anyone outside the system) values.
You can’t get away from your intuitions.
We contemplate our moral intuitions and intuitively abstract rules from them, and have the intuition that such rules should be followed. Yet the rules may turn out to violate other intuitions. The problem is not rules against intuitions, but intuitions against intuitions.
Well, deriving and following a rule can allow for consistent behavior across sets of situations where my intuitions are inconsistent. If I value consistency, I might endorse that.
If you value consistency, AND your moral system is derived from your moral intuitions and nothing else, AND your moral intuitions are inconsistent...
If it walks like a science and it talks like a science but it is astrology, is it worth doing the calculations?
When you consider that “doing the calculations” is how astronomy was ultimately derived from and separated from astrology, quite possibly.
Good point. So we have now had astronomy for more than 2000 years, thanks Astrology!
What have we gotten from doing Ethics? What has moral realism delivered? I suppose you might say a population easier to rule, and that would be something indeed, but before I put words in your mouth, you tell me what you get for having tried to systematize morality for 4000 years?
Yes. Though not more than once.
Incidentally, I don’t accept that adding the “and nothing else” clause preserves the meaning of my original comment. Which is fine; you’re under no obligation to preserve that meaning. I just wanted to make that explicit.
Since we are talking about how to form a system of morality, where it might come from, and what might be good or bad about doing so, if there is some source of morality that you are presuming that has not yet entered the discussion, by all means, please, let ’er rip. I would rather know what it is than merely know that you may or may not have one in your pocket that you haven’t stated.
I have not claimed a hidden source of morality, nor do I possess one, so you can rest easy on that score.
But deriving a rule, or a consistent set of rules, or a system of morality based on my moral intuitions and my knowledge of the world is different from deriving it based on my moral intuitions and nothing else, even if my knowledge of the world is not itself a source of morality.
It’s better if you tell me what you think and I don’t have to guess. I don’t see how a moral intuition could ever even appear absent some knowledge of the world; these are feelings which arise in response to situations we find ourselves in and (at least we think) comprehend.
If your systematized morality is “better” than your non-systematized moral intuitions, please tell me, at least through examples, 1) how it is different and 2) how you know (or at least why you think) it is better.
I’m not asserting that moral intuitions can arise without any knowledge of the world.
But not all of my knowledge of the world plays a significant role in the formation of my moral intuitions, for various reasons, any more than all of my knowledge of the world plays a significant role in the formation of my physical and social intuitions.
And (as I’ve said repeatedly) taking all of that knowledge into account along with my moral intuitions when deriving moral rules can lead to a different set of rules than deriving those moral rules based on my intuitions and nothing else (as you initially framed the question).
As I said in the first place, the potential value of a systematized moral framework is that it can allow for consistent behavior across sets of situations where my intuitions are inconsistent, and some people value consistency.
If that’s not clear enough to preclude the need for guesswork, I apologize for the lack of clarity. If you have specific questions or challenges I’ll try to address them. If I’m just not making any sense at all, I’d prefer to drop this exchange here.
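To make the consistency point concrete, here is a toy sketch in Python; the case, the framings, and the rule are all invented for illustration, not endorsed:

    # Toy model: intuitions can return different verdicts for the same
    # case under different framings; a rule keyed to the case's features
    # returns one verdict per case by construction.
    intuition = {
        "divert the trolley onto one worker": "wrong",
        "save five workers at the cost of one": "permissible",  # same case, reframed
    }

    for framing, verdict in intuition.items():
        print(framing, "->", verdict)  # two verdicts for one underlying case

    def rule(deaths_if_act, deaths_if_refrain):
        # Illustrative rule only; it responds to features, not framing.
        return "permissible" if deaths_if_act < deaths_if_refrain else "wrong"

    print(rule(deaths_if_act=1, deaths_if_refrain=5))  # one answer, however framed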
Well, in mathematics and science we made a lot of progress when we stopped just following our intuitions and started deriving and following explicit rules.
Yes. In science and math we had reality against which to measure our progress.
What do you measure your progress against in coming up with a moral system? If it is the extent to which your moral system matches your moral intuitions, you will never do better than just following your intuitions.
If you are measuring your progress against something else, do say what it is. I know I have been searching for decades for some way to make morality objective.
If there is nothing against which to measure your progress, then following your intuitions is immeasurably better or worse than making up a system based on SOME of your intuitions.
I would say that for someone who accepts liberal ideas (counting most conservatives in Western countries), this seems like a very useful argument: if we always used intuitional morality, we would currently have morality that disagrees with their intuitions (about slavery being wrong, democracy being good, those sorts of things).
Of course, as a rational argument it makes no sense. It just appeals to me because my intuitions are Consequentialist and I want to try to convince others to follow Consequentialism, because it will lead to better outcomes.
Except how do you measure something against reality in a way that doesn’t (at least implicitly) rely on your intuitions?
Well, measuring the system against our intuitions is more-or-less what we do in mathematics.
I can routinely travel thousands of miles in a few hours at extremely finite cost. Our modern society gives US citizens on average the benefit of 25 humans’ worth of energy usage (that is, the amount of energy per day used by the average American would require 25 slaves to generate if human slaves were used to generate energy).
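A rough sketch of that arithmetic, with both input figures chosen as assumptions that happen to reproduce the 25 figure; the real ratio depends heavily on which energy uses you count and how much useful work you credit a laborer with:

    # Back-of-the-envelope "energy slaves" arithmetic. Both inputs are
    # assumptions for illustration, not sourced data.
    daily_energy_use_kwh = 15.0  # assumed household energy use per person per day
    human_output_kwh = 0.6       # assumed useful mechanical output of one laborer per day

    print(round(daily_energy_use_kwh / human_output_kwh))  # -> 25 "energy slaves"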
Even in math, I can build my understanding into circuits which, by working, verify my mathematical reasoning and, more importantly, verify that the reasoning stands independently of my own feelings or intuitions about it. I routinely calculate things and then build software to implement them that 1) either works as I expected from my mathematical calculations, or 2) doesn’t, in which case, so far, I have always been able to find that I made a mistake in my calculations, or in my interpretation of how my implementation was related to my calculations.
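A minimal illustration of that loop in Python, using an arbitrary identity to stand in for “my calculations”:

    # The closed form is the "calculation"; the loop is the "implementation".
    # If they ever disagree, one of them embodies a mistake.
    def closed_form(n):
        return n * (n + 1) // 2  # calculated value of 1 + 2 + ... + n

    def implementation(n):
        return sum(range(1, n + 1))

    for n in range(1000):
        assert closed_form(n) == implementation(n), n
    print("calculation and implementation agree on all tested cases")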
I’ll admit some theoretical intuitive component to understanding the connection between science/math and real benefits that come from it.
But it isn’t just that I am privileging math/science in a way I refuse to privilege moral reasoning. It is that I don’t even know what benefits for systematized morality you are claiming. What do I expect as my payoff for systematizing morality, that I may perhaps have to make some intuitive leaps to notice? What does systematized morality offer us that merely relying on moral intuition in a non-systematic way doesn’t do just as well?
This is a real question, not some rhetorical question to say “see, I am right.” What do you get out of throwing your faith behind moral realism and systematizing it?
That is a good question.
I think determining some underlying rule can help me make (subjectively of course) better judgements, which have a much better chance of being consistent (as TheOtherDave mentions).
It’s much too easy for the emotional machinery in our brains to be hijacked by images of baby seals, terrorists, etc., and I feel my judgements are better if I can use some underlying rules rather than my intuitions.
If all you have to base your moral system on is your intuitions, then the best you can hope for in a “consistent” systematization is to do no worse than flipping a coin when you have conflicting intuitions.
I suppose what I am really reacting to is that it strikes me that carefully systematizing morality makes as much sense as carefully systematizing astrology. The details and the calculations and the cogitation serve to give the illusion of there being something there while in actuality… all you have is Rationality Theater.
True, at some point intuitions come into play (unless you are some kind of Spock) to determine your personal moral bedrock. But at least for me, these intuitions are not all born equal, and not all intuitions are part of this bedrock.
A typical example would be: ‘Cute animals are more important’, which may conflict with some deeper rule in some situation. Instead of just following my intuition with that first rule, I think my moral judgements are better when I take a step back and try to use the deeper rule.
Well, the same problem exists in science but that hasn’t stopped us from making progress.
You are on to something: science, in some sense, is taken on faith, and morality, in a similar sense, is taken on faith.
But the faiths are different. The faith of science is a testable faith: either you build stuff that works or you don’t. If your musings about thermodynamics lead to a steam engine and later to an air conditioner, and your musings about electrons in a semiconductor lead to a transistor and later to a smartphone, well, that is what your high priests of science can bring you.
What is the test of a faith in moral realism? I don’t wish to answer with a strawman that I will knock down, I really want to know, how do you evaluate if your moral system is doing a good job? Do you measure fewer inconsistencies in intuition? Do you get elected to the senate? Do people vote up your karma?
Science leads to jet aircraft and HD TVs and hip replacements. Two out of three Abrahamic religions lead to enjoyable promises of an eternity of bliss.
What is the promise of a moral system? What is the thing it claims to give me that I don’t have just following my intuitions in a non-systematic way? I know what the high-priests of science are claiming for their mojo, and it sure seems to me they deliver. (And they don’t require me to believe in their mumbo jumbo “induction” stuff in order to use their jet aircraft and smartphones). What are the moral realists offering? And even more important, what are they delivering?
The best answer I can give you is that a moral realist today is currently in the same situation as a physical realist was before the development of the scientific method. There were lots of competing not-quite coherent theories of what it means for something to be real, but if you asked 100 people they would all agree on whether something was a rock or a glass of milk barring weirdness. Similarly, today there are lots of competing not-quite coherent theories of what it means for something to be moral, but if you asked 100 people they would all agree that killing an innocent person is wrong barring weirdness.
(The above is paraphrased from another comment that I can’t locate right now.)
I realize that the above may not be the most satisfying answer, especially if the history of philosophy isn’t available for you.
So perhaps we still await the development of “the moral method.”
It does strike me, and I mean I have not thought of this really until right now, that law and government are the engineering branches of “the moral method” of “moral realism,” as “the scientific method” corresponds to “physical realism.” Economics and Sociology may be the Physics and Chemistry of “moral realism.” The progress that law and government have enabled is an economic productivity, contributed to by billions of people (or at least hundreds of millions), which dwarfs that of our predecessors in the same way that our technology dwarfs theirs.
There are at least a few interesting things about this idea. One need not “believe” in science to use the fruits of it, whereas plausibly a belief in science is necessary to contribute to its progress. One can be an anarchist or a communist or an ignoramus or a nihilist and benefit from the modern economy and unprecedented levels of personal security in society. Presumably any “realism” would have implications that did not depend on the state of belief in the thing which is real.
What my off-the-cuff thesis lacks is any necessity for the truth-or-falsehood of moral statements. “You ought to obey the law” or “killing in a way which is against the law is wrong” are NOT required to be meaningful statements with an objective truth value. Or are they? In some sense, the truth value of scientific statements requires the assumptions of logic and induction. One could say that it is not necessary to have a truth value associated with “all electrons repel each other” in order for me to build a smartphone which will only work if its untested electrons act the same in the future as the very, very few electrons I have actually tested in the past. So perhaps “de facto,” as it were, the practitioners and advancers of law and government have a belief in “the moral method” just as non-philosopher scientists and engineers seem to have a “de facto” belief in induction.
This identification of law and government with the stuff of moral realism even has the feature that it can be wrong, or wrong-ish, just like science and engineering. ALL engineering design is done using approximations of physics. That is, we KNOW the principles behind our designs are “wrong” in that they are inexact approximations of what is really happening. We then use trial and error to develop an art of design which “usually” works, which usually keeps the thing we are designing away from where the inaccuracies of our design assumptions matter. Heck, we even have the idea that there can be better and worse law and government, just as there is better and worse science.
To stretch the analogy past all reason, can I say something interesting about the moral discussions that to me seem typical and which make me want to be a nihilist? These are the discussions of “my morality comes from moral intuitions, but one of my intuitions is that my morality should be consistent, so I build these elaborate personal structures instead of just doing what feels right.” Their analogue in science might be someone who assiduously records all sorts of personal data to advance his health without a clue that his better option would be to plug in to the progress made in medical research. Someone who attempts to build his own smartphone through introspection instead of getting the professional product.
I don’t know. Now I’ll have to read about philosophy of law and government to discover that everything I’ve just said has been said before, its flaws categorized into labeled branches of belief. But for now I’m pretty happy with the concept and feel as though I’ve just invented something even though I’ve probably just dredged it up from things I’ve heard and read over the last half a century and, at least consciously, forgotten.
Given the current state of economics and sociology I’d replace chemistry with alchemy in that metaphor. Also, foundational systems like utilitarianism and deontology are the equivalent of astronomy/astrology before they got separated.
A better analogy might be someone who believes that he can develop a physical theory simply by introspection, without looking at the world. (It was a popular philosophical position before the scientific method was developed; after all, that’s how mathematics works, and it had been successful.)
Doing this is, of course, a major project in philosophy. Many attempts have serious problems.
I can see that… one of the obvious problems is that we can find some case where the ethical systems go against our moral intuitions. This sometimes leads to attempts to make the system incorporate this case (and then some more), but I feel it quickly becomes rather obvious that we cannot come up with any consistent system that also satisfies our intuitions. I’m a bit pessimistic that philosophers will resolve this problem soon...
On a happier note, I have found Kant’s reasoning very useful for my own personal opinion-making: it constantly reminds me that if I believe X about, say, genetically-modified food, nuclear energy, etc., I really need to frame my opinion in terms of a rule that doesn’t single out the particular case, and I try to think what this same rule would mean for other opinions I hold.
Reasoning about, e.g., mathematics or physics has the same problem, and yet in those fields we can still build the system on our intuitions while accepting that they’re sometimes wrong.
It seems to me that Utilitarianism can be similar to the way you describe Kant’s approach: selecting a specific part of our intuitions (“actions that have bad consequences are bad”), ignoring the rest, and then extrapolating from that. Well, that and coming up with a utility function. Still, it seems to me that you can essentially apply it logically to situations and come up with decisions based on actual reasoning: you’ll still have biases, but at least (besides editing utility functions) you won’t be editing your basic morality just to follow your intuitions.
Of course, as mwengler notes, we’re just replacing our arbitrary set of moral intuitions with a cohesive, logical system based on… one of those arbitrary moral intuitions. I’m pretty sure there’s no solution to that; the only justification for being moral at all is our moral intuitions. Still, if you are going to be moral, I find Utilitarianism preferable to intuitional morality… actually, I guess mainly because I’d already been a Utilitarian for a while before realizing morality was arbitrary, so my moral intuitions have changed to be consequentialist. Oh well. :/