Is morality 1) a kind of preference with a somewhat different set of emotional flavors associated with it or 2) something which has a true-or-falseness beyond preference?
For me to credit 2) (morality is true), I would need to know that 2) is actually distinguishable in the world from 1). Someone tells me electrons attract other electrons; we do a test; it turns out to be false, and "electrons repel other electrons" is a true statement beyond preference. Someone else tells me electrons repel each other because they hate each other. Maybe some day we will learn how to talk to electrons, but until then this is not testable and not tested, and so the people who come down on each side of this question are not talking about truth.
Someone tells me Morality has falsifiable truths in it, where is the experimental test? Name a moral proposition and describe the test to determine its falsehood or truthiness. If the proponent of moral truth did this, I missed it and need to be reminded.
If you believe that there are moral truths, but you cannot propose a test to verify a single one of them, you are engaged in a form of belief that is different from believing in things which are falsifiable and have been found true. I am happy to label this difference "scientific" or "fact-based," but of course the danger of labels is that they carry freight from their pasts. However you choose to label it, is there a proponent of the existence of moral truth who can propose a test, or will these proponents accept that "moral truth is more like truths about electrons hating each other and less like truths about electrons repelling each other"?
Note that in discussing the proposition "electrons hate each other" I actually proposed a test of its truth, but pointed out we did not yet know how to do that test. If we say "we will NEVER know how to do that test, it's just dopey," are we saying something scientific? Something testable? I THINK not; I think this is an unscientific claim. But maybe some chain of scientific discovery will put us in a place where we can test statements about what will NEVER be knowable. I personally do not know how to do that now, though. So if I hold an opinion that electrons neither hate nor love each other, I hold it as an opinion, knowing it might be true, it might be false, and/or it might be meaningless in the real world.
So then what of Moral “Truths?” For the moment, at my state of knowledge, they are like statements about the preferences of electrons. Maybe there are moral truths but I don’t know how to learn any of them as facts and I am not aware of anyone who has presented a moral truth and a test for its truthiness. Maybe some day…
But in the meantime, everybody who tells me there are moral truths, and especially anybody who tells me "X is one of those moral truths," gets tossed in the dustbin labeled "people who don't know the difference between opinion and truth." Is murder wrong? Is that a fact? If by murder you mean killing people, you cannot find a successful major civilization that has EVER appeared to believe that. Self-defense and the protection of those labeled "innocent" are observed to justify homicide in every society I am aware of.
But suppose by murder we mean "unjustifiable homicide"? Well then you are either in tautology land (murder is defined as killing which is wrong) or you have kicked the can down the road to a discussion of what justifies homicide, and now you need to propose tests of your hypotheses about what justifies homicide.
So even if there is "moral truth," if you can't propose a test for any moral truths, you are happily joining the cadre of people who know the truth of whether electrons hate each other or not.
G.E. Moore is famous for this argument against external world skepticism: "How do I know I have hands?" (he raises his hands in front of his face) "Here! Here are my hands!" His point was that it is absurd to call the more obvious into doubt by means of the less obvious: By whatever means I might understand an argument supporting skepticism about my hands (say, the Boltzmann Brain argument), by those very means I am all the more sure that I do have hands.
I think something similar might apply here. To say that morality is 'objective' or 'subjective' may be an equivocation or category mistake, but if I understand anything, I understand that slavery is wrong. I can't falsify this, or reduce it to some more basic principle, because there is nothing more basic, and no possible world in which slavery is right. A world in which the alternative is true cannot be tested for because it is wholly inconceivable.
Further reading: http://en.wikipedia.org/wiki/Here_is_a_hand#Logical_form http://www.overcomingbias.com/2008/01/knowing-your-ar.html http://www.gwern.net/Prediction%20markets#fn41
His point was that it is absurd to call the more obvious into doubt by means of the less obvious: By whatever means I might understand an argument supporting skepticism about my hands (say, the Boltzmann Brain argument), by those very means I am all the more sure that I do have hands.
A world in which the alternative is true cannot be tested for because it is wholly inconceivable.
A world in which I do not have hands is totally conceivable and easily tested for. So it would appear there is at least this gigantic difference between "Slavery is wrong" and your G.E. Moore analogy's source statement.
To say that it is obvious that "slavery is wrong" does not rule out this being a statement of preference, does it? I would rather be slowly sucked to orgasm by healthy young females than have my limbs and torso crushed and ground while immersed in a strong acid. This is AT LEAST as obvious to me as "slavery is wrong" is obvious to you, I would bet, yet it is quite explicitly a statement of preference.
To say that it is obvious that “slavery is wrong” does not rule out this being a statement of preference, does it?
That's a fair point, but I can easily imagine a world in which I prefer being crushed to death to receiving the attentions of some attractive women, so long as you let me add a little context. Lots of people have chosen painful deaths over long and pleasant lives, and we've rightly praised them for it. So while I agree that the choice you describe is a clear preference of mine, it has none of the strength of my moral belief about slavery.
A world in which I do not have hands is totally conceivable and easily tested for.
That wasn't quite the point. The analogue here wouldn't be between the hands and the moral principle. The analogue is this: how surely do you know this epistemic rule about falsification? Do you know it more surely than you know that slavery is wrong? I, for one, am vastly more sure that slavery is wrong than I am that instrumentalism or falsificationism is the correct epistemic theory.
I may be misguided, of course, so I won't say that instrumentalist epistemology can't in principle call my moral idea into question. But it seems absurd to assume that it does.
I think your point about falsification is a good one. I in fact believe in falsifiability in some powerful sense of the word believe. I suspect a positive belief in falsifiability is at least weakly falsifiable: with time and resources one could look for correlations between belief in falsifiability and various forms of creativity and understanding. I would expect to find it highly correlated with engineering, scientific, and mathematical understanding and progress.
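A minimal sketch of the kind of correlation check that proposal involves, on entirely hypothetical survey data (every variable and number below is invented for illustration):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical survey: for each respondent, a 0-10 score for how strongly
# they endorse falsifiability as an epistemic standard, and a crude index
# of their engineering/scientific/mathematical output.
belief_in_falsifiability = [2, 9, 5, 7, 1, 8, 6, 3, 9, 4]
technical_output_index = [1, 8, 4, 6, 2, 9, 5, 2, 7, 3]

# Pearson's r; the prediction above is that r comes out strongly positive.
r = correlation(belief_in_falsifiability, technical_output_index)
print(f"r = {r:.2f}")
```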
Of course "proving" falsifiability by using falsifiability is circular. In my own mind I fall back on instrumentalism: I claim I'm interested in learning falsifiable things about the world; I don't care whether we call them "true" or not, and I don't care whether you call other, non-falsifiable statements true or not. I'm interested in the falsifiable ones. Behind or above that belief is my belief that I really want power: I want to be able to do things, and it is only the falsifiable statements that allow me to manipulate the environment effectively, since non-falsifiable statements almost by definition don't help me in manipulating the world in which I would be trying to falsify them.
Is a statement like "Slavery is wrong" falsifiable? Or even "Enslaving this particular child in this particular circumstance is wrong"? I think they are not "nakedly" falsifiable, and in fact I have zero problem imagining a world in which at least some people do not think they are wrong (we live in that world). I think the statement "Slavery is wrong because it reduces average happiness" is falsifiable. "Slavery is wrong because it misallocates human resources" is falsifiable. These reflect instrumentalist THEORIES of morality, though it does not seem that the theories themselves could be falsifiable.
So I have an assumption of falsifiability. You may have an assumption of what is moral. I admit the symmetry.
I can tell you the "I can't imagine it" test fails in epic fashion in science. One of the great thrills of special relativity and quantum mechanics is that they are so wildly non-intuitive for humans, and yet they are so powerfully instrumentally true in understanding absolute reams of phenomena, allowing us to correctly design communications satellites and transistors, to name just two useful instrumentalities. So I suppose my belief against "I can't imagine it" as a useful way to learn the truth is a not-necessarily-logical extension of a powerful truth from one domain that I respect powerfully into other domains.
Further, I CAN imagine a world in which slavery is moral. I can go two ways to imagine this: 1) mostly we don't mind enslaving those who are not "people." Are herds of cattle raised for food immoral? Is it unimaginable that they are moral? Well, if you can't imagine that being moral, what about cultivated fields of wheat? Human life in human bodies ends if we stop exploiting other life forms for nutrition. Sure, you can "draw the line" at chordates for whether cultivating a crop is "slavery" or not. Other people have drawn the line at clan members, family members, nation members, skin-color members. I'm sure there were many white slaveholders in the southern U.S. who could not imagine a world in which enslaving white people was moral. Or enslaving British people. Or enslaving British aristocracy. So how far do you go to be sure you are not enslaving anything that shouldn't be enslaved? Or do you trust your imagination that it is only people (or only chordates), even as you realize how powerfully other people's imaginations have failed in the past?
I also reject all religious truth based on passed-down stories of direct revelations from god. Again, this kind of belief fails epically in doing science, and I extend its failure there into domains where perhaps it is not so easy to show it fails. And in my instrumentalist soul, I ultimately don't care whether I am "right" or "wrong"; I would just rather use my limited time, energy, and brain-FLOPs pursuing falsifiable truths, and hope for the best.
I can easily imagine a world in which I prefer being crushed to death to receiving the attentions of some attractive women, so long as you let me add a little context. [..] So while I agree that the choice you describe is a clear preference of mine, it has none of the strength of my moral belief about slavery.
I understand this to imply that you cannot imagine a world in which you prefer to send someone into slavery rather than not do so, no matter what the context. Have I understood that correctly?
No, I can easily imagine a world in which I prefer to send someone into slavery rather than drink a drop of lemon juice: all I have to do is imagine that I'm a bad person. My point was that it's easy to imagine any world in which my preferences are different, but I cannot imagine a world in which slavery is morally permissible (at least not without radically changing what slavery means).
How about a world in which, by sending one person from your planet into slavery, you defer the enslavement of the entire earth for 140 years? A world in which alien invaders outgun us by more than Europeans outgunned the tribes in Africa from which many of them took slaves, but are willing, for some reason we can't comprehend, to take one person you pick back to their home planet, 70 light years away? Failing your making that choice, they will stay here and, at some expense to themselves, enslave our entire race and planet.
Can you now imagine a world in which your sending someone into slavery is not immoral? If so, how does this change in what you can and cannot imagine change your opinion of either the imagination standard or slavery's moral status?
It seems to me the most likely source of emotions and feelings is evolution. We aren't just evolved to run from a sabre-tooth tiger; we have a rush of overwhelming fear as the instrumentality of our fleeing effectively. Similarly, mammals as a whole, not just humans and not even just primates, have evolved to be "social" animals, meaning a tremendously important part of the environment was our group of other mammals. Long before we made the argument that slavery was wrong, we had strong feelings of wanting to resist the things that went along with being enslaved, while apparently we also had feelings of power that assisted us in forcing others to do what we wanted.
Given the way emotions probably evolved, I think it does make sense to look to our emotions to guide us in knowing what strategies probably work better than others in interacting with our environment, but it doesn't make sense to expect them to guide us correctly in corner cases, in rare situations in which there would not have been enough pay-off for evolution to do any fine-tuning of emotional responses.
Can you imagine a world in which killing people is morally permissible?
Sure, I live in one. I chose slavery because it's a pretty unequivocal case of moral badness, while killing is not (consider war, self-defense, execution, etc.). I think probably rape, and certainly lying, are things which are always morally wrong (I don't think this entails that one should never do them, however).
My thought is just that at least at the core of them, moral beliefs aren’t subject to having been otherwise. I guess this is true of beliefs about logic too, though maybe not for the same reasons. And this doesn’t make either kind of belief immune to error, of course.
Is morality 1) a kind of preference with a somewhat different set of emotional flavors associated with it or 2) something which has a true-or-falseness beyond preference?
The trick objective moralists play is to set truth against preference, when what you have are truths about preferences. Is it true, for you, that ice cream is yummy? For me, it is. That doesn't make it any less a preference.
Here’s a simple example of a moral claim being tested and falsified:
A: What are you doing with that gun?
B: I’m shooting at this barrel. It’s a lot of fun.
A: What? Don’t do that! It’s wrong.
B: No, it’s not. There’s nothing wrong with shooting at a barrel.
A: But there’s a child inside that barrel! You could kill her.
B: You don’t know what you’re talking about. That barrel’s empty. Go look.
A: [looks in the barrel] Oh, you’re right. Sorry about that.
So here A made a moral claim with which B disagreed (“It’s wrong to shoot at that barrel.”). B proposed a test of the moral claim. A performed the test and the moral claim was falsified.
Now, I anticipate a number of objections to the adequacy of this example. I think they can all be answered, but instead of trying to predict how you will object or tediously listing all the objections I can think of, I’ll just wait for you to object (if you so desire) before responding.
So even if there is "moral truth," if you can't propose a test for any moral truths, you are happily joining the cadre of people who know the truth of whether electrons hate each other or not.
I’m already part of this cadre. I know that electrons do not hate each other.
Without the assumption that shooting a child is immoral, this is not a moral argument. With that as an assumption, the moral component of the conclusion is assumed, not proven.
Find me the proof that shooting a child is immoral and we will be off to a good start.
If you’re looking for a test of a moral claim that does not rely on any background assumptions about morality, then I agree that I can’t give you an example. But that’s because your standard is way too high. When we test scientific hypotheses, the evidence is always interpreted in the context of background assumptions. If it’s kosher for scientific experiments to assume certain scientific facts (as they must), then why isn’t it kosher for moral experiments to assume certain moral facts?
Consider the analog of your position in the descriptive case: someone denies that there’s any fact of the matter about whether descriptive claims about the external world are true or false. This person says, “Show me how you’d test whether a descriptive claim is true or false.” Now you could presumably give all sorts of examples of such tests, but all of these examples will assume the truth of a host of other descriptive claims (minimally, that the experimental apparatus actually exists and the whole experiment isn’t a hallucination). If your interlocutor insisted that you give an example of a test that does not itself assume the truth of any descriptive claim, you would not be able to satisfy him.
So why demand that I must give an example of a test of a moral claim that does not assume the truth of any other moral claim? Why does the moral realist have this extra justificatory burden that the scientific realist does not? It’s fine to have problems with the specific assumptions being made in any particular experiment, and these can be discussed. Perhaps you think my particular assumptions are flawed for some reason. But if you have a general worry about all moral assumptions then you need to tell me why you don’t have a similar worry about non-normative assumptions. If you have a principled reason for this distinction, that would be the real basis of your moral anti-realism, not this business about untestability.
I do think moral and scientific reasoning are far less asymmetric than is usually assumed. But that doesn’t mean I think there are no asymmetries at all. Asymmetries exist, and perhaps they can be leveraged into an argument for moral anti-realism that is not also an argument against scientific realism. So I wouldn’t say it’s inconsistent to be a physical realist and a moral anti-realist. I will say that in my experience most people who hold that combination of positions will, upon interrogation, reveal an unjustified (but not necessarily unjustifiable) double standard in the way they treat moral discourse.
I don’t think it is a double standard. Empiricism admits the Problem of Induction, but says that the problem doesn’t justify retreating all the way to Cartesian skepticism. This position is supported by the fact that science makes good predictions—I would find the regularity of my sensory experiences surprising if physical realism were false. Plus, the principle of falsification (i.e. making beliefs pay rent) tells us what sorts of statements are worth paying attention to.
Moral reasoning seems to lack any equivalent for either falsification or prediction. I don't know what it means to try to falsify a statement like "Killings in these circumstances are not morally permissible." And to the extent that predictions can be made based on the statement, they seem either false or historically contingent—it's pretty easy to imagine my society having different rules about what killings are morally permissible simply by looking at how a nearby society came to its different conclusions.
In short, the problem of induction in empiricism seems very parallel to the is/ought problem in moral philosophy. But moral philosophy seems to lack the equivalent of practical arguments like accurate prediction that seem to rescue empiricism.
I do think one can offer a pragmatic justification for moral reasoning. It won’t be exactly parallel to the justification of scientific reasoning because moral and scientific discourse aren’t in the same business. Part of the double standard I was talking about involves applying scientific standards of evaluation to determine the success of moral reasoning. This is as much of an error as claiming that relativity is false because the nuclear bomb caused so much suffering. We don’t engage in moral reasoning in order to make accurate predictions about sensory experience. We engage in moral reasoning in order to direct action in such a way that our social environment becomes a better place. And I do think we have plenty of historical evidence that our particular system of moral reasoning has contributed to making the world a better place, just as our particular system of scientific reasoning has contributed to our increasing ability to control and predict the behavior of the world.
Now obviously there’s a circularity here. Our standards for judging that the world is better now that slavery is illegal and women can vote are internal to the very moral discourse we purport to be evaluating. But this kind of ultimate circularity is unavoidable when we attempt to justify any system of justification as a whole. It’s precisely the problem Hume pointed out when he talked about induction. Sure, we can appeal to past success as a justification of our inductive practices, but that justification only works if we are already committed to induction. Furthermore, our belief in the past success of the scientific method is based on historical data collected and interpreted in accord with this method. Somebody who rejects the scientific method wholesale may well say “Why should I believe any of these historical claims you are making?”
A completely transcendental justification, one that would be normative to any possible mind in mindspace, is an impossible goal in both moral and scientific reasoning. Any justification you offer for your justificatory practices is ultimately going to appeal to standards that are internal to those practices. That’s something we’ve all learned to live with in science, but there’s still a resistance to this unfortunate fact when it comes to moral discourse.
And to the extent that predictions can be made based on the statement, they seem either false or historically contingent—it’s pretty easy to imagine my society having different rules about what killings are morally permissible simply by looking at how a nearby society came to its different conclusions.
Our scientific schemes of justification are historically contingent in the same way. There are a number of other communities (extremely religious ones, for instance) that employ a different set of tools for justifying descriptive claims about the universe. Of course, our schemes of justification are better than theirs, as evidenced by their comparative lack of technological and predictive success. By the same token, though, our moral schemes of justification are more successful than those of, say, fundamentalist Islamic societies, as evidenced by our greater degree of moral progress. In both cases, the members of those other societies would disagree that we have done better than them, but that’s because they have different (and I would say incorrect) standards of evaluation.
Empiricism has only one circularly justified position: You can (more or less) trust the input of your senses—which implies some consistency over time. Everything else follows from that. Modern science is better than Ptolemaic science because it makes better predictions.
By contrast, there's essentially no limit to moral circularity. There's the realism premise: there is a part of the territory called "moral rightness." Then you need a circular argument to show any particular moral premise (these killings are unjustified) is part of moral rightness. And there are multiple independent moral premises. (Knowing when killing is wrong does not shed much light on when lying is wrong.) It's not even clear that there are a finite number of circularly justified assertions.
So I hold empiricism to the same standard as moral realism, and moral realism seems to come up short. Further, my Minimization of Circular Justification principle is justified by worry about the ease of creating a result simply by making it an axiom. (That is, the Pythagorean Theorem is on a different footing if it is introduced as an axiom of Euclidean geometry rather than as a derived result.)
If your principle is actually that circular justification must be minimized, then why aren’t you an anti-realist about both scientific and moral claims? Surely that would involve less circular justification than your current position. You wouldn’t even have to commit yourself to the one circularly justified position assumed by empiricism.
In any case, scientific reasoning as a whole does not just reduce to the sort of minimal empiricism you describe. For starters, even if you assume that the input of your senses is trustworthy and will continue to remain trustworthy, this does not establish that induction based on the input of your senses is trustworthy. This is a separate assumption you must make. Your minimal empiricism also does not establish that simpler explanations of data tend to be better. This is a third assumption. It also doesn’t establish what it means for one explanation to be simpler than another. It doesn’t establish that the axioms on which the mathematical and statistical tools of science are based are true. I could go on.
Scientific justification as it’s actually practiced in the lab involves a huge suite of tools, and it is not true that the reliability of all these tools can be derived once you accept that you can trust the input of your senses. A person can be an empiricist in your sense while denying the reliability of statistical methods used in science, for instance. To convince them otherwise you will presumably present data that you think establishes the reliability of those methods. But in order for the data to deliver this conclusion, you need to use the same sorts of statistical methods that the skeptic is rejecting. I don’t see how your shared empiricism helps in this situation.
Our schemes of justification, both scientific and moral, have developed through a prolonged process of evolutionary and historical accretion. The specific historical reasons underlying the acceptance of particular tools into the toolbox are complex and variegated. It is implausible in either case that we could reconstruct the entire scheme from one or two simple assumptions.
If you'd like to separate the axiom about the reliability of the senses from the axiom that sensory input will remain consistent, I won't actively resist—I think reliability of the senses implies consistency of the senses, but I'm not certain my formulation is more technically correct.
Regarding Occam's razor—I'm not sure whether that is a fundamental principle or just a useful rule of thumb. If MWI and Copenhagen really are in evidentiary equipoise, I'm not sure I should have a preference for one or the other (that's obviously not the consensus position in this community).
It doesn’t establish that the axioms on which the mathematical and statistical tools of science are based are true.
I think deductive reasoning produces necessary truths—so in a sense, I get statistics “for free” as long as I accept the Peano axioms. Other than that, I don’t understand the quoted assertion.
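As a minimal illustration of such a necessary truth, here is a one-line proof in Lean (the theorem name is my own invention): the equation is certified by pure computation from the definitions of the numerals, with no experiment involved.

```lean
-- A necessary truth: both sides reduce to the same numeral by
-- definitional computation, so reflexivity (`rfl`) proves the equation.
theorem two_plus_two_eq_four : 2 + 2 = 4 := rfl
```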
More generally, empirical philosophy provides a place to stop the recursion. I don't think circular justifications work at all, so I think a separate justification for using this stopping place is required—I have memory of consistent sensory impressions, and that is difficult to explain except by believing that the consistency is real. One could object that I can't justify reliance on my memory—so I'm being hypocritical to allow my memories to justify themselves. Maybe so, but there's no other principled stopping place for the recursion—and continuing the recursion past this point devolves to the point where I don't think coherence is a workable concept.
To return to the comparison with morality, I suggest that all the axiomatic assertions in the empirical program are at a fundamental level. When you start doing object level science, recursion goes away entirely. By contrast, object level morality never gets away from [EDIT: recursion]. As you noted, it is impossible to say whether we’ve made moral progress without referencing what moral position is better.
If progress (scientific, moral, etc.) really is possible, we ought to be able to get away from recursive reasoning. That we can't when dealing with moral reasoning is not a good sign that moral reasoning is talking about some objective fact.
When you start doing object level science, recursion goes away entirely. By contrast, object level morality never gets away from morality. As you noted, it is impossible to say whether we’ve made moral progress without referencing what moral position is better.
I don’t know what you mean by “object level morality never gets away from morality”. Read literally, that’s tautologically true, but I don’t see the relevance. Is this a typo?
Also, I’m not seeing the distinction here. When I’m engaged in object-level moral reasoning, or when I read examples of object-level moral reasoning on blogs or in newspapers, I very rarely come across recursion or circular justification. There’s usually an assumption that everyone in the community agrees that certain sorts of fundamental moral inferences are justified, and the debate is about whether those inferences can be made in a particular case. Here is a classic example of object-level moral reasoning. MLK offers a number of justifications for his moral stance on this particular issue. None of these justifications, as far as I can see, are circular. I don’t think this is atypical. Of course, if you think that every moral argument must also simultaneously justify the whole enterprise of objective moral evaluation, then every moral argument will have a circular component. But this places a disproportionately large burden on moral justification.
It’s true that if I want to argue that we have made moral progress I need to take for granted certain moral standards of evaluation, but if I want to argue that we have made scientific progress I need to take for granted certain scientific standards of evaluation. The only difference I can see is that the moral assumptions are as a matter of fact more contentious than the scientific ones, so perhaps moral debate breaks down on disagreement about foundational assumptions more often. But this is at least partly because most scientific debate is usually conducted in an institutional setting that has various mechanisms for consensus formation and weeding out sufficiently recalcitrant dissenters. Outside this setting, debate about descriptive issues is often just as contentious as moral debate. I know a number of new-agey people who have completely bizarre standards of epistemic justification. My discussion with them quite often breaks down on disagreement about foundational assumptions.
There’s usually an assumption that everyone in the community agrees that certain sorts of fundamental moral inferences are justified, and the debate is about whether those inferences can be made in a particular case.
That's not my sense at all. Moral inferences are fairly easy (compared to cutting-edge scientific inferences). Toy example: If God wants us to attend church, the inference that church attendance should be compelled by the government follows quite easily. There are secondary negative effects, but the only reason to care about them is if the moral assertion that God wants church attendance is false.
When I read political arguments, they almost always operate by assuming agreement on the moral premise. When that assumption is falsified, the argument falls apart. Even for fairly ordinary moral disputes, the argument is usually based on moral principle, not facts or moral inference.
By contrast, equivalently basic scientific questions are fact and inference based. To decide how much weight a bridge can carry, knowing the strength of the steel and the design of the bridge is most of the work. In practice, those types of disputes don’t devolve into arguments about whether gravity is going to work this time.
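To make the contrast concrete, here is a toy sketch of that kind of fact-plus-inference calculation, assuming a simply supported rectangular steel beam with a point load at midspan (every number is invented for illustration):

```python
# Toy load-capacity estimate for a simply supported beam with a point
# load at midspan. All numbers are invented for illustration.
span = 10.0                # beam span, m
width, height = 0.3, 0.6   # rectangular cross-section, m
sigma_allow = 150e6        # allowable bending stress of the steel, Pa

I = width * height**3 / 12  # second moment of area, m^4
c = height / 2              # distance from neutral axis to outer fiber, m

# A midspan point load P gives max moment M = P * span / 4, and bending
# stress sigma = M * c / I. Setting sigma = sigma_allow and solving for P:
P_max = sigma_allow * I * 4 / (span * c)
print(f"approximate point-load capacity: {P_max / 1000:.0f} kN")  # ~1080 kN
```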
There are secondary negative effects, but the only reason to care about them is if the moral assertion that God wants church attendance is false.
Unless the secondary effects were that people are more likely to eat bacon for breakfast that day, now that they aren't able to sleep in, and it also happens that God doesn't want people to eat pigs.
Someone tells me Morality has falsifiable truths in it, where is the experimental test?
You are describing instrumentalism, which is an unpopular position on this forum, where most follow EY's realism. For a realist, untestable questions have answers, justified on the basis of their preferred notion of Occam's razor.
If you believe that there are moral truths, but you cannot propose a test to verify a single one of them, you are engaged in a form of belief that is different from believing in things which are falsifiable and have been found true.
Replace "moral truth" with "many worlds", and you get EY's understanding of QM.
Concerns with confusing the map with the territory are extensively discussed on this forum. If it walks like a duck and quacks like a duck, is it not instrumentalism?
The difference is whether you believe that even though it walks like a duck and quacks like a duck, it could be in fact a well-designed mechanical emulation of a duck indistinguishable from an organic duck, and then prefer the former model, because Occam’s razor!
Occam's razor is a strategy for being a more effective instrumentalist. It may or may not be elevated to some other status, but this is at least one powerful draw that it has. Do not infer robot ducks when regular ducks will do; do not waste your efforts (instrumentality!) designing for robot ducks when your only evidence so far (razor) is ducks. Or even more compactly, in your belief: whether these ducks are "real" or "emulations," only design for what you actually know about these ducks, not for something that takes a lot of untested assumptions about the ducks.
Do not spend a lot of time filling in the details of unreachable lands on your map.
If you accept as "true" some statements that are not testable, and other statements that are testable, then perhaps we just have a labeling problem? We would have "true-and-I-can-prove-it" and "true-and-I-can't-prove-it." I'd be surprised if given those two categories there would be many people who wouldn't elevate the testable statements above the untestable ones in "truthiness."
For me, if it is truly, knowably, not falsifiable, then there is no evidence for it that matters. Many things that are called not falsifiable are probably falsifiable eventually. So MWI: do we know QM so well that we know there are no implications of MWI that are experimentally distinguishable from non-MWI theories? Something like MWI, for me, is something which probably is falsifiable at some level; I just don't know how to falsify it right now, and I am not aware of anybody I trust who does know how to falsify it. Then the "argument" over MWI is really an argument over whether developing falsifiable theories from a story that includes MWI is more or less likely to be efficiently productive than developing falsifiable theories from a story that rejects MWI. We are arguing over the quality of intuitions years before the falsification or verification can actually take place. Much as we spend a lot of effort anticipating the implications of AI which is not even close to being built.
I actually think the discussions of MWI are useful, as someone who does participate in forming theories and opinions about theories. I just think it is NOT a discussion about scientific truth, or at least not yet. It is not an argument over which horse won the last race; rather, it is an argument over what kinds of horses will be running a race a few years from now, and which ones will win those races.
But yes, more evidence means more confidence which I think is entirely consistent with the map/territory/bayesian approach generally credited around here.
We would have "true-and-I-can-prove-it" and "true-and-I-can't-prove-it." I'd be surprised if given those two categories there would be many people who wouldn't elevate the testable statements above the untestable ones in "truthiness."
Where would mathematical statements fit in this classification of yours? They can be proven, but many of them can't be tested, and even for the ones that can be tested the proof is generally considered better evidence than the test.
In fact, you are implicitly relying on a large untested (and mostly untestable) framework to describe the relationship between whatever sense input constitutes the result of one of your tests, and the proposition being tested.
There’s another category, necessary truths. The deductive inferences from premises are not susceptible to disproof.
Thus, the categories for this theory of truthful statements are: necessary truths, empirical truths ("true-and-I-can-prove-it"), and "true-and-I-can't-prove-it."
Generally, this categorization scheme will put most contentious moral assertions into the third category.
This may be a situation where the modern world’s resources start to break down the formerly strong separation between mind and world.
These days, most if not all of the rules of math can be coded into a computer, and new propositions tested or evaluated by those systems. Once I've implemented exact rational arithmetic (floating point won't quite do, since float addition is not associative), I can SHOW STATISTICALLY the commutative law, the associative law, that 2+2 never equals 5, that numbers have additive and multiplicative inverses, and on and on and on.
These modern machines seem to render the statements within axiomatic mathematical systems as testable and falsifiable as any other physical facts.
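A minimal sketch of that kind of statistical testing, using Python's exact Fraction type (the sample size and ranges are arbitrary choices):

```python
import random
from fractions import Fraction

random.seed(0)

def rand_frac():
    """A random nonzero exact rational in a modest range."""
    f = Fraction(random.randint(-99, 99), random.randint(1, 99))
    return f if f != 0 else Fraction(1)

# Statistically "test" algebraic laws by random sampling.
for _ in range(10_000):
    a, b, c = rand_frac(), rand_frac(), rand_frac()
    assert a + b == b + a                    # commutativity
    assert (a + b) + c == a + (b + c)        # associativity
    assert a + (-a) == 0                     # additive inverse
    assert a * (1 / a) == 1                  # multiplicative inverse (a != 0)
    assert Fraction(2) + Fraction(2) != 5    # 2 + 2 never equals 5

print("no counterexamples found in 10,000 random trials")
```

A proof assistant certifies these laws deductively; the point of the sketch is that a machine can also probe them empirically, like any other repeatable experiment.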
These days, most if not all of the rules of math can be coded into a computer, and new propositions tested or evaluated by those systems. Once I've implemented exact rational arithmetic (floating point won't quite do, since float addition is not associative), I can SHOW STATISTICALLY the commutative law, the associative law, that 2+2 never equals 5, that numbers have additive and multiplicative inverses, and on and on and on.
We would have “true-and-I-can-prove-it” and “true-and-I-can’t-prove-it.”
The definition of proof is the issue. An instrumentalist requires falsifiable predictions, a realist settles for acceptable logic when no predictions are available.
The definition of proof is the issue. An instrumentalist requires falsifiable predictions, a realist settles for acceptable logic when no predictions are available.
How did instrumentalism and realism get identified as conflicting positions? There are forms of physical realism that conflict with instrumentalism—but instrumentalism is not inherently opposed to physical realism.
Not inherently, no. But the distinction is whether the notion of territory is a map (instrumentalism) or the territory (realism). It does not matter most of the time, but sometimes, like when discussing morality or quantum mechanics, it does.
A realist finds it perfectly OK to argue which of the many identical maps is "truer" to the invisible underlying territory. An instrumentalist simply notes that there is no way to resolve this question to everyone's satisfaction.
I’m objecting to your exclusion of instrumentalism from the realist label. An anti-realist says there is no territory. That’s not necessarily the position of the instrumentalist.
An anti-realist says there is no territory. That’s not necessarily the position of the instrumentalist.
Right. Anti-realism makes an untestable and unprovable statement like this (so does anti-theism, by the way). An instrumentalist says that there is no way to tell if there is one, and that the map/territory distinction is an often useful model, so why not use it when it makes sense.
I’m objecting to your exclusion of instrumentalism from the realist label.
Well, this is an argument about labels, definitions and identities, which is rarely productive. You can either postulate that there is this territory/reality thing independent of what anyone thinks about it, or you can call it a model which works better in some cases and worse in others. I don’t really care what label you assign to each position.
Respectfully, you were the one invoking technical jargon to do some analytical work.
Without jargon: I think there is physical reality external to human minds. I think that the best science can do is make better predictions—accurately describing reality is harder.
You suggest there is unresolvable tension between those positions.
I think there is physical reality external to human minds.
It’s a useful model, yes.
I think that the best science can do is make better predictions—accurately describing reality is harder.
The assumption that “accurately describing reality” is even possible is a bad model, because you can never tell if you are done. And if it is not possible, then there is no point postulating this reality thing. Might as well avoid it and stick with something that is indisputable: it is possible to build successively better models.
You suggest there is unresolvable tension between those positions.
Yes, one of them postulates something that cannot be tested. If you are into Occam's razor, that's something that fails it.
We can't talk about testing propositions against reality until we decide whether there is a reality to test them against. If you are uncertain about that point, the nuances between predicting reality and modelling reality are not on point—and probably confuse the analysis more than they shed any light.
If someone walked into one of your high-end physics lectures and wanted to talk about whether there was reality (see Cartesian doubt), I think you would tell him that the physics class was not the venue for that type of conversation. If you tried to answer his questions while also answering other students’ questions, everything would get hopelessly confused.
If the experiment is not a way to tap into reality (in some extremely metaphorical sense), why should I care about the experimental results when trying to decide whether my proposition is true?
If you want to know how far a rock you throw will land (a prediction based on a model constructed based on previously performed experiments), you want your model to have the necessary predictive power. Whether it corresponds to some metaphysical concept of reality is quite secondary.
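A minimal sketch of such a predictive model, assuming ideal projectile motion from ground level with no air resistance (the launch values are arbitrary):

```python
import math

def landing_distance(speed, angle_deg, g=9.81):
    """Predicted range of a thrown rock under ideal projectile motion:
    launched from ground level, no air resistance."""
    theta = math.radians(angle_deg)
    return speed**2 * math.sin(2 * theta) / g

# Whether we call the model "reality" or "just a model," the check is the
# same: does the predicted range match where the rock actually lands?
print(f"{landing_distance(12.0, 45.0):.1f} m")  # ~14.7 m
```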
That doesn’t answer my question. To rephrase using your new example, if the prior experiments do not metaphorically “tap into reality,” why should I have any confidence that a model based on those experimental results will be useful in predicting future events?
Well, either the experimental result has predictive power, or it doesn’t. If certain kinds of experimental results prove useful for predicting the future, then I should have confidence in predictions based on (models based on) those results. Whether I call them “reality” or “a model” doesn’t really matter very much.
More generally, to my way of thinking, this whole "instrumentalists don't believe in reality" business mostly seems like a distinction in how we use words rather than in what experiences we anticipate.
It would potentially make a difference, I suppose, if soi-disant instrumentalists didn’t actually expect the results of different experiments to be reconcilable with one another (under the principle that each experiment was operating on its own model, after all, and there’s no reason to expect those models to have any particular relationship to one another). But for the most part, that doesn’t seem to be the case.
There’s a bit of that when it comes to quirky quantum results, I gather, but to my mind that’s kind of an “instrumentalism of the gaps”… when past researchers have come up with a unified model we accept that unified model, but when current data doesn’t seem unified given our current understanding, rather than seeking a unified model we shrug our shoulders and accept the inconsistency, because hey, they’re just models, it’s not like there’s any real underlying territory.
Which in practice just means we wait for someone else to do the hard work of reconciling it all.
why should I have any confidence that a model based on those experimental results will be useful in predicting future events?
Because it has been experimentally confirmed before, and from experience we can assign a high probability that a model that has been working well in the past will continue to work in similar circumstances in the future.
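One way to make that probability assignment concrete is Laplace's rule of succession, sketched here under the assumption of a uniform prior over the model's success rate:

```python
def p_next_success(successes, trials):
    """Laplace's rule of succession: posterior probability that the next
    trial succeeds, given `successes` out of `trials` so far and a
    uniform prior over the underlying success rate."""
    return (successes + 1) / (trials + 2)

# A model whose predictions have been confirmed in 50 of 50 past trials:
print(f"{p_next_success(50, 50):.3f}")  # ~0.981: high confidence, not certainty
```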
Is morality 1) a kind of preference with a somewhat different set of emotional flavors associated with it or 2) something which has a true-or-falseness beyond preference?
For me to credit 2) (Morality is true), I would need to know that 2) is a statement that is actually distinguishable in the world from 1). Someone tells me electrons attract other electrons, we do a test, turns out to be false, electrons repel other electrons is a true statement beyond preference. Someone else tells me electrons repel each other because they hate each other. Maybe some day we will learn how to talk to electrons, but until then this is not testable, not tested, and so the people who come down on each side of this question are not talking about truth.
Someone tells me Morality has falsifiable truths in it, where is the experimental test? Name a moral proposition and describe the test to determine its falsehood or truthiness. If the proponent of moral truth did this, I missed it an need to be reminded.
If you believe that there are moral truths, but you cannot propose a test to verify a single one of them, you are engaged in a form of belief that is different from believing in things which are falsifiable and have been found true. I am happy to label this difference “scientific” or “fact-based” but of course the danger of labels is they carry freight from their pasts. But however you choose to label it, is their a proponent of the existence of moral truth who can propose a test, or will these proponents accept that “moral truth is more like truths about electrons hating each other and less like truths about electrons repelling each other?”
Note that in discussing the proposition “electrons hate each other” I actually proposed a test of it’s truth, but pointed out we did not yet know how to do that test. If we say “we will NEVER know how to do that test, its just dopey” are we saying something scientific? Something testable? I THINK not, I think this is an unscientific claim. But maybe some chain of scientific discovery will put us in a place where we can test statements about what will NEVER be knowable. I personally do not know how to do that now, though. So if I hold an opinion that electrons neither hate nor love each other, I hold it as an opinion, knowing it might be true, it might be false, and/or it might be meaningless in the real world.
So then what of Moral “Truths?” For the moment, at my state of knowledge, they are like statements about the preferences of electrons. Maybe there are moral truths but I don’t know how to learn any of them as facts and I am not aware of anyone who has presented a moral truth and a test for its truthiness. Maybe some day…
But in the meantime, everybody who tells me there are moral truths and especially anybody who tells me “X is one of those moral truths” gets tossed in the dustbin labeled “people who don’t know the difference between opinion and truth.” Is murder wrong, is that a fact? If by murder you mean killing people, you cannot find a successful major civilization that has EVER appeared to believe that. Self-defense, protection of those labeled “innocent,” are observed to justify homicide in societies that I am aware of.
But suppose by murder we mean “unjustifiable homicide? Well then you are either in tautology land (murder is defined as killing which is wrong) or you have kicked the can down the road to a discussion of what justifies homicide, and now you need to propose tests of your hypotheses about what justifies homicide.
So even if there is “moral truth,” if you can’t propose a test for for any moral truths, you are happily joining the cadre of people who know the truth of whether electrons hate each other or not.
G.E. Moore is famous for this argument against external world skepticism: “How do I know I have hands?” (he raises his hands in front of his face) “Here! Here are my hands!”. His point was that it is absurd to call the more obvious into doubt by means of the less obvious: By whatever means I might understand an argument supporting skepticism about my hands (say, the Boltzmann Brain argument), by those very means I am all the more sure that I do have hands.
I think something similar might apply here. To say that morality is ‘objective’ or ‘subjective’ may be an equivocation or category mistake, but if I understand anything, I understand that slavery is wrong. I can’t falsify this, or reduce it to some more basic principle because there is nothing more basic, and no possible world in which slavery is right. A world in which the alternative is true cannot be tested for because it is wholly inconceivable.
Further reading: http://en.wikipedia.org/wiki/Here_is_a_hand#Logical_form http://www.overcomingbias.com/2008/01/knowing-your-ar.html http://www.gwern.net/Prediction%20markets#fn41
A world in which I do not have hands is totally conceivable and easily tested for. So it would appear you have at least this gigantic difference between “Slavery is wrong” differs from your G.E. Moore analogy source statement.
To say that it is obvious that “slavery is wrong” does not rule out this being a statement of preference, does it? I would rather be slowly sucked to orgasm by healthy young females than to have my limbs and torso crushed and ground while immersed in a strong acid. This is AT LEAST as obvious to me as “slavery is wrong” is obvious to you, I would bet, yet it is quite explicitly a statement of preference.
That’s a fair point, but I can easily imagine a world in which I prefer being crushed to death than receiving the attentions of some attractive women, so long as you let me add a little context. Lots of people have chosen painful deaths over long and pleasant lives and we’ve rightly praised them for it. So while I agree that the choice you describe is a clear preference of mine, it has none of the strength of my moral belief about slavery.
That wasn’t quite the point. The analogue here wouldn’t be between the hands and the moral principle. The analogue is this: how surely do you know this epistemic rule about falsification? Do you know it more surely than you know that slavery is wrong? I for one, am vastly more sure that slavery is wrong than I am that instrumentalism or falsificationism is the correct epistemic theory.
I may be misguided, of course, so I won’t say that instrumentalist epistemology can’t in principle call my moral idea into question. But it seems absurd to assume that is does.
I think your point about falsification is a good one. I in fact believe in falisifiability in some powerful sense of the word believe. I suspect a positive belief in falsifiability is at least weakly falsifiable. With time and resources one could look for correlations between belief in falsifiability and various forms of creativity and understanding. I would expect to find it highly correlated with engineering, scientific, and mathematical understanding and progress.
Of course “proving” falisifiabilty by using falisifiability is circular. In my own mind I fall back on instrumentalism: I claim I’m interested in learning falisifiable things about the world and don’t care whether we call them “true” or not and don’t care whether you call other non-falsifiable statements true or not, I’m interested in falsifiable ones. Behind or above that belief is my belief that I really want power, I want to be able to do things, and that it is the falsifiable statements only that allow me to manipulate the environment effectively: since non-falsifiable statements almost by definition don’t help me in manipulating the world in which I would be trying to falsify them.
Is a statement like “Slavery is wrong” falsifiable? Or even “Enslaving this particular child in this particular circumstance”? I think they are not “nakedly” falsifiable and in fact have zero problem imagining a world in which at least some people do not think they are wrong (we live in that world). I think the statement “Slavery is wrong because it reduces average happiness” is falsifiable. “Slavery is wrong because it misallocates human resources” is falsifiable. These reflect instrumentalist THEORIES of morality, theories which it does not seem to be could be falsifiable.
So I have an assumption of falsifiability. You may have an assumption of what is moral. I admit the symmetry.
I can tell you the “I can’t imagine it” test fails in epic fashion in science. One of the great thrills of special relativity and quantum mechanics is that they are so wildly non-intuitive for humans, and yet they are so powerfully instrumentally true in understanding absolute reams of phenomenon allowing us to correctly design communications satellites and transistors to name just two useful instrumentalities. So I suppose my belief against ” I can’t imagine it” as a useful way to learn the truth is a not-necessarily-logical extension of a powerful truth from one domain that I respect powerfully in to other domains.
Further, I CAN imagine a world in which slavery is moral. I can go two ways to imagine this: 1) mostly we don’t mind enslaving those who are not “people.” Are herds of cattle for food immoral? Is it unimaginable that they are moral? Well if you can’t imagine that is moral, what about cultivated fields of wheat? Human life in human bodies ends if we stop exploiting other life forms for nutritition. Sure, you can “draw the line” at chordates for whether cultivating a crop is “slavery” or not. Other people have drawn the line at clan members, family members, nation members, skin-color members. I’m sure there were many white slave holders in the southern U.S. who could not imagine a world in which enslaving white people was moral. Or enslaving British people. Or enslaving British aristocracy. So how far do you go to be sure you are not enslaving anything that shouldn’t be enslaved? Or do you trust your imagination that it is only people (or only chordates), even as you realize how powerfully other people’s imaginations have failed in the past?
I also reject all religious truth based on passed down stories of direct revelations from god. Again, this kind of belief fails epically in doing science, and I extend its failure there in to domains where perhaps it is not so easy to show it fails. And in my instrumentalist soul, I ultimately don’t care whether I am “right” or “wrong,” I would just rather use my limited time, energy, and brain-FLOPs pursuing falsifiable truths, and hope fort he best.
I understand this to imply that you cannot imagine a world in which you prefer to send someone into slavery than not do so, no matter what the context. Have I understood that correctly?
No, I can easily imagine a world in which I prefer to send someone into slavery than drink a drop of lemon juice: all I have to do is imagine that I’m a bad person. My point was that it’s easy to imagine any world in which my preferences are different, but I cannot imagine a world in which slavery is morally permissible (at least not without radically changing what slavery means).
How about a world in which by sending one person from your planet into slavery you defer the enslavement of the entire earth for 140 years? A world in which alien invaders which outgun us more than Europeans outgunned the tribes in Africa from which many of them took slaves, but who are willing for some reason we can’t comprehend to take one person you pick back to the home planet, 70 light years away. But failing your making that choice, they will stay here and at some expense to themselves enslave our entire race and planet?
Can you now imagine a world in which your sending someone in to slavery is not immoral? If so, how does this change in what you can and cannot imagine change your opinion of either the imagination standard or slavery’s moral status?
It seems to me most likely source of emotions, feelings, is evolution. We aren’t just evolved to run from a sabre tooth tiger, we have a rush of overwhelming fear as the instrumentality of our fleeing effectively. SImilarly, we have evolved, mammals as a whole, not just humans not even just primates, to be “social” animals meaning a tremendously important part of the environment was our group of other mammals. Long before we made the argument that slavery was wrong, we had strong feelings of wanting to resist the things that went along with being enslaved, while apparently we also had power feelings that assisted us in forcing others to do what we wanted.
Given the way emotions probably evolved, I think it does make sense to look to our emotions to guide us in knowing what strategies probably work better than others in interacting with our environment, but it doesn’t make sense to expect them to guide us correctly in corner cases: rare situations in which there would not have been enough pay-off for evolution to do any fine-tuning of emotional responses.
Ah, OK. Thanks for the clarification.
Can you imagine a world in which killing people is morally permissible?
Sure, I live in one. I chose slavery because it’s a pretty unequivocal case of moral badness, while killing is not, given war, self-defense, execution, etc. I think that rape probably, and lying certainly, are always morally wrong (I don’t think this entails that one should never do them, however).
My thought is just that, at least at their core, moral beliefs aren’t subject to having been otherwise. I guess this is true of beliefs about logic too, though maybe not for the same reasons. And this doesn’t make either kind of belief immune to error, of course.
OK. Thanks for clarifying.
The trick objective moralists play is to set truth against preference, when what you have are truths about preferences. Is it true, for you, that ice cream is yummy? For me, it is. That doesn’t make it any less a preference.
Here’s a simple example of a moral claim being tested and falsified:
A: What are you doing with that gun?
B: I’m shooting at this barrel. It’s a lot of fun.
A: What? Don’t do that! It’s wrong.
B: No, it’s not. There’s nothing wrong with shooting at a barrel.
A: But there’s a child inside that barrel! You could kill her.
B: You don’t know what you’re talking about. That barrel’s empty. Go look.
A: [looks in the barrel] Oh, you’re right. Sorry about that.
So here A made a moral claim with which B disagreed (“It’s wrong to shoot at that barrel.”). B proposed a test of the moral claim. A performed the test and the moral claim was falsified.
Now, I anticipate a number of objections to the adequacy of this example. I think they can all be answered, but instead of trying to predict how you will object or tediously listing all the objections I can think of, I’ll just wait for you to object (if you so desire) before responding.
I’m already part of this cadre. I know that electrons do not hate each other.
Without the assumption that shooting a child is immoral, this is not a moral argument. With that as an assumption, the moral component of the conclusion is assumed, not proven.
Find me the proof that shooting a child is immoral and we will be off to a good start.
If you’re looking for a test of a moral claim that does not rely on any background assumptions about morality, then I agree that I can’t give you an example. But that’s because your standard is way too high. When we test scientific hypotheses, the evidence is always interpreted in the context of background assumptions. If it’s kosher for scientific experiments to assume certain scientific facts (as they must), then why isn’t it kosher for moral experiments to assume certain moral facts?
Consider the analog of your position in the descriptive case: someone denies that there’s any fact of the matter about whether descriptive claims about the external world are true or false. This person says, “Show me how you’d test whether a descriptive claim is true or false.” Now you could presumably give all sorts of examples of such tests, but all of these examples will assume the truth of a host of other descriptive claims (minimally, that the experimental apparatus actually exists and the whole experiment isn’t a hallucination). If your interlocutor insisted that you give an example of a test that does not itself assume the truth of any descriptive claim, you would not be able to satisfy him.
So why demand that I must give an example of a test of a moral claim that does not assume the truth of any other moral claim? Why does the moral realist have this extra justificatory burden that the scientific realist does not? It’s fine to have problems with the specific assumptions being made in any particular experiment, and these can be discussed. Perhaps you think my particular assumptions are flawed for some reason. But if you have a general worry about all moral assumptions then you need to tell me why you don’t have a similar worry about non-normative assumptions. If you have a principled reason for this distinction, that would be the real basis of your moral anti-realism, not this business about untestability.
Your position suggests that one cannot consistently be a physical realist and a moral anti-realist. Is that a fair summary of your position?
I do think moral and scientific reasoning are far less asymmetric than is usually assumed. But that doesn’t mean I think there are no asymmetries at all. Asymmetries exist, and perhaps they can be leveraged into an argument for moral anti-realism that is not also an argument against scientific realism. So I wouldn’t say it’s inconsistent to be a physical realist and a moral anti-realist. I will say that in my experience most people who hold that combination of positions will, upon interrogation, reveal an unjustified (but not necessarily unjustifiable) double standard in the way they treat moral discourse.
I don’t think it is a double standard. Empiricism admits the Problem of Induction, but says that the problem doesn’t justify retreating all the way to Cartesian skepticism. This position is supported by the fact that science makes good predictions—I would find the regularity of my sensory experiences surprising if physical realism were false. Plus, the principle of falsification (i.e. making beliefs pay rent) tells us what sorts of statements are worth paying attention to.
Moral reasoning seems to lack any equivalent of either falsification or prediction. I don’t know what it means to try to falsify a statement like “Killings in these circumstances are not morally permissible.” And to the extent that predictions can be made based on the statement, they seem either false or historically contingent—it’s pretty easy to imagine my society having different rules about which killings are morally permissible simply by looking at how a nearby society came to its different conclusions.
In short, the problem of induction in empiricism seems very parallel to the is/ought problem in moral philosophy. But moral philosophy seems to lack the equivalent of practical arguments like accurate prediction that seem to rescue empiricism.
I do think one can offer a pragmatic justification for moral reasoning. It won’t be exactly parallel to the justification of scientific reasoning because moral and scientific discourse aren’t in the same business. Part of the double standard I was talking about involves applying scientific standards of evaluation to determine the success of moral reasoning. This is as much of an error as claiming that relativity is false because the nuclear bomb caused so much suffering. We don’t engage in moral reasoning in order to make accurate predictions about sensory experience. We engage in moral reasoning in order to direct action in such a way that our social environment becomes a better place. And I do think we have plenty of historical evidence that our particular system of moral reasoning has contributed to making the world a better place, just as our particular system of scientific reasoning has contributed to our increasing ability to control and predict the behavior of the world.
Now obviously there’s a circularity here. Our standards for judging that the world is better now that slavery is illegal and women can vote are internal to the very moral discourse we purport to be evaluating. But this kind of ultimate circularity is unavoidable when we attempt to justify any system of justification as a whole. It’s precisely the problem Hume pointed out when he talked about induction. Sure, we can appeal to past success as a justification of our inductive practices, but that justification only works if we are already committed to induction. Furthermore, our belief in the past success of the scientific method is based on historical data collected and interpreted in accord with this method. Somebody who rejects the scientific method wholesale may well say “Why should I believe any of these historical claims you are making?”
A completely transcendental justification, one that would be normative to any possible mind in mindspace, is an impossible goal in both moral and scientific reasoning. Any justification you offer for your justificatory practices is ultimately going to appeal to standards that are internal to those practices. That’s something we’ve all learned to live with in science, but there’s still a resistance to this unfortunate fact when it comes to moral discourse.
Our scientific schemes of justification are historically contingent in the same way. There are a number of other communities (extremely religious ones, for instance) that employ a different set of tools for justifying descriptive claims about the universe. Of course, our schemes of justification are better than theirs, as evidenced by their comparative lack of technological and predictive success. By the same token, though, our moral schemes of justification are more successful than those of, say, fundamentalist Islamic societies, as evidenced by our greater degree of moral progress. In both cases, the members of those other societies would disagree that we have done better than them, but that’s because they have different (and I would say incorrect) standards of evaluation.
Yes, there’s inherently a certain amount of unsatisfying circularity in everything. But that’s a weakness that calls for minimizing circularity.
Empiricism has only one circularly justified position: you can (more or less) trust the input of your senses—which implies some consistency over time. Everything else follows from that. Modern science is better than Ptolemaic science because it makes better predictions.
By contrast, there’s essentially no limit to moral circularity. There’s the realism premise: there is a part of the territory called “moral rightness.” Then you need a circular argument to show that any particular moral premise (these killings are unjustified) is part of moral rightness. And there are multiple independent moral premises. (Knowing when killing is wrong does not shed much light on when lying is wrong.) It’s not even clear that there are a finite number of circularly justified assertions.
So I hold empiricism to the same standard as moral realism, and moral realism seems to come up short. Further, my Minimization of Circular Justification principle is justified by worry about the ease of creating a result simply by making it an axiom. (That is, the Pythagorean Theorem is on a different footing if it is introduced as an axiom of Euclidean geometry rather than derived as a result.)
If your principle is actually that circular justification must be minimized, then why aren’t you an anti-realist about both scientific and moral claims? Surely that would involve less circular justification than your current position. You wouldn’t even have to commit yourself to the one circularly justified position assumed by empiricism.
In any case, scientific reasoning as a whole does not just reduce to the sort of minimal empiricism you describe. For starters, even if you assume that the input of your senses is trustworthy and will continue to remain trustworthy, this does not establish that induction based on the input of your senses is trustworthy. This is a separate assumption you must make. Your minimal empiricism also does not establish that simpler explanations of data tend to be better. This is a third assumption. It also doesn’t establish what it means for one explanation to be simpler than another. It doesn’t establish that the axioms on which the mathematical and statistical tools of science are based are true. I could go on.
Scientific justification as it’s actually practiced in the lab involves a huge suite of tools, and it is not true that the reliability of all these tools can be derived once you accept that you can trust the input of your senses. A person can be an empiricist in your sense while denying the reliability of statistical methods used in science, for instance. To convince them otherwise you will presumably present data that you think establishes the reliability of those methods. But in order for the data to deliver this conclusion, you need to use the same sorts of statistical methods that the skeptic is rejecting. I don’t see how your shared empiricism helps in this situation.
Our schemes of justification, both scientific and moral, have developed through a prolonged process of evolutionary and historical accretion. The specific historical reasons underlying the acceptance of particular tools into the toolbox are complex and variegated. It is implausible in either case that we could reconstruct the entire scheme from one or two simple assumptions.
If you’d like to separate the axiom about the reliability of the senses from the axiom that sensory input will remain consistent, I won’t actively resist—I think reliability of the senses implies consistency of the senses, but I’m not certain my formulation is more technically correct.
Regarding Ockham’s Razor—I’m not sure whether it is a fundamental principle or just a useful rule of thumb. If MWI and Copenhagen really are in evidentiary equipoise, I’m not sure I should have a preference for one or the other (that’s obviously not the consensus position in this community).
I think deductive reasoning produces necessary truths—so in a sense, I get statistics “for free” as long as I accept the Peano axioms. Other than that, I don’t understand the quoted assertion.
More generally, empirical philosophy provides a place to stop the recursion. I don’t think circular justifications work at all, so I think a separate justification for using this stopping place is required—I have memories of consistent sensory impressions, and that is difficult to explain except by believing the consistency is real. One could object that I can’t justify reliance on my memory—so I’m being hypocritical to allow my memories to justify themselves. Maybe so, but there’s no other principled stopping place for the recursion—and continuing the recursion past this point devolves to the point that I don’t think coherence is a workable concept.
To return to the comparison with morality, I suggest that all the axiomatic assertions in the empirical program are at a fundamental level. When you start doing object-level science, the recursion goes away entirely. By contrast, object-level morality never gets away from [EDIT: recursion]. As you noted, it is impossible to say whether we’ve made moral progress without referencing which moral position is better.
If progress (scientific, moral, etc) really is possible, we ought to be able to get away from recursive reasoning. That we can’t when dealing with moral reasoning is not a good sign that moral reasoning is talking about some objective fact.
I don’t know what you mean by “object level morality never gets away from morality”. Read literally, that’s tautologically true, but I don’t see the relevance. Is this a typo?
Also, I’m not seeing the distinction here. When I’m engaged in object-level moral reasoning, or when I read examples of object-level moral reasoning on blogs or in newspapers, I very rarely come across recursion or circular justification. There’s usually an assumption that everyone in the community agrees that certain sorts of fundamental moral inferences are justified, and the debate is about whether those inferences can be made in a particular case. Here is a classic example of object-level moral reasoning. MLK offers a number of justifications for his moral stance on this particular issue. None of these justifications, as far as I can see, are circular. I don’t think this is atypical. Of course, if you think that every moral argument must also simultaneously justify the whole enterprise of objective moral evaluation, then every moral argument will have a circular component. But this places a disproportionately large burden on moral justification.
It’s true that if I want to argue that we have made moral progress I need to take for granted certain moral standards of evaluation, but if I want to argue that we have made scientific progress I need to take for granted certain scientific standards of evaluation. The only difference I can see is that the moral assumptions are as a matter of fact more contentious than the scientific ones, so perhaps moral debate breaks down on disagreement about foundational assumptions more often. But this is at least partly because most scientific debate is usually conducted in an institutional setting that has various mechanisms for consensus formation and weeding out sufficiently recalcitrant dissenters. Outside this setting, debate about descriptive issues is often just as contentious as moral debate. I know a number of new-agey people who have completely bizarre standards of epistemic justification. My discussion with them quite often breaks down on disagreement about foundational assumptions.
Yes, typo corrected.
That’s not my sense at all. Moral inferences are fairly easy (compared to cutting-edge scientific inferences). Toy example: if God wants us to attend church, the inference that church attendance should be compelled by the government follows quite easily. There are secondary negative effects, but the only reason to care about them is if the moral assertion that God wants church attendance is false.
When I read political arguments, they almost always operate by assuming agreement on the moral premise. When that assumption is falsified, the argument falls apart. Even for fairly ordinary moral disputes, the argument is usually based on moral principle, not facts or moral inference.
By contrast, equivalently basic scientific questions are fact and inference based. To decide how much weight a bridge can carry, knowing the strength of the steel and the design of the bridge is most of the work. In practice, those types of disputes don’t devolve into arguments about whether gravity is going to work this time.
Unless the secondary effects were that people are more likely to eat bacon for breakfast that day now that they aren’t able to sleep in and it also happens that God doesn’t want people to eat pigs.
You are describing instrumentalism, which is an unpopular position on this forum, where most follow EY’s realism. For a realist, untestable questions have answers, justified on the basis of their preferred notion of Occam’s razor.
Replace “moral truth” with “many worlds”, and you get EY’s understanding of QM.
Concerns with confusing the map with the territory are extensively discussed on this forum. If it walks like a duck and quacks like a duck, is it not instrumentalism?
The difference is whether you believe that even though it walks like a duck and quacks like a duck, it could be in fact a well-designed mechanical emulation of a duck indistinguishable from an organic duck, and then prefer the former model, because Occam’s razor!
Occam’s razor is a strategy for being a more effective instrumentalist. It may or may not be elevated to some other status, but this is at least one powerful draw that it has. Do not infer robot ducks when regular ducks will do; do not waste your efforts (instrumentality!) designing for robot ducks when your only evidence so far (razor) is ducks. Or even more compactly, in your belief: whether these ducks are “real” or “emulations,” only design for what you actually know about these ducks, not for something that takes a lot of untested assumptions to presume about the ducks.
Do not spend a lot of time filling in the details of unreachable lands on your map.
Yep. Also, do not argue which of the many identical maps is better.
If you accept as “true” some statements that are not testable, and other statements that are testable, then perhaps we just have a labeling problem? We would have “true-and-I-can-prove-it” and “true-and-I-can’t-prove-it.” I’d be surprised if, given those two categories, many people wouldn’t elevate the testable statements above the untestable ones in “truthiness.”
Is this different from having higher confidence in statements for which I have more evidence?
For me, if it is truly, knowably not falsifiable, then there is no evidence for it that matters. But many things that are called not falsifiable are probably falsifiable eventually. So MWI: do we know QM so well that we know there are no implications of MWI that are experimentally distinguishable from non-MWI theories? Something like MWI, for me, is probably falsifiable at some level; I just don’t know how to falsify it right now, and I am not aware of anybody I trust who does. Then the “argument” over MWI is really an argument over whether developing falsifiable theories from a story that includes MWI is more or less likely to be efficiently productive than developing falsifiable theories from a story that rejects MWI. We are arguing over the quality of intuitions years before the falsification or verification can actually take place, much as we spend a lot of effort anticipating the implications of AI which is not even close to being built.
I actually think the discussions of MWI are useful, as someone who does participate in forming theories and opinions about theories. I just think it is NOT a discussion about scientific truth, or at least not yet. It is not an argument over which horse won the last race; rather, it is an argument over what kinds of horses will be running a race a few years from now, and which ones will win those races.
But yes, more evidence means more confidence which I think is entirely consistent with the map/territory/bayesian approach generally credited around here.
Where would mathematical statements fit in this classification of yours? They can be proven, but many of them can’t be tested, and even for the ones that can be tested the proof is generally considered better evidence than the test.
In fact, you are implicitly relying on a large untested (and mostly untestable) framework to describe the relationship between whatever sense input constitutes the result of one of your tests, and the proposition being tested.
There’s another category, necessary truths. The deductive inferences from premises are not susceptible to disproof.
Thus, the categories for this theory of truthful statements are: necessary truths, empirical truths (“true-and-I-can-prove-it”), and “true-and-I-can’t-prove-it.”
Generally, this categorization scheme will put most contentious moral assertions into the third category.
Agreed, except for your non-conventional use of the word “prove,” which is normally restricted to things in the first category.
This may be a situation where the modern world’s resources start to break down the formerly strong separation between mind and world.
These days, most if not all of the rules of math can be coded into a computer, and new propositions tested or evaluated by those systems. Once I’ve implemented floating point math, I can SHOW STATISTICALLY the commutative law, the associative law, that 2+2 never equals 5, that numbers have additive and multiplicative inverses and on and on and on.
These modern machines seem to render the statements within axiomatic mathematical systems as testable and falsifiable as any other physical facts.
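A minimal sketch of that kind of statistical check, assuming Python and IEEE floating point arithmetic (the function names are illustrative, not from any particular library):

```python
import random

def test_commutativity(trials: int = 1_000_000) -> int:
    """Sample random pairs and count violations of a + b == b + a."""
    failures = 0
    for _ in range(trials):
        a = random.uniform(-1e12, 1e12)
        b = random.uniform(-1e12, 1e12)
        if a + b != b + a:
            failures += 1
    return failures

def test_additive_inverse(trials: int = 1_000_000) -> int:
    """Sample random values and count violations of a + (-a) == 0."""
    failures = 0
    for _ in range(trials):
        a = random.uniform(-1e12, 1e12)
        if a + (-a) != 0.0:
            failures += 1
    return failures

if __name__ == "__main__":
    print("commutativity failures:", test_commutativity())
    print("additive-inverse failures:", test_additive_inverse())
    # Both counts come out 0 for IEEE floats, but note what this shows:
    # that no sampled case failed, not that no case can fail.
```

(Tellingly, the same harness applied to associativity, (a + b) + c == a + (b + c), does report failures for floats, which is itself a small lesson about what sampling can and cannot show.)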
How would you do this for something like the Poincaré conjecture or the uncountability of the reals?
Also how do you show that your implementation does in fact compute addition without using math?
Frankly the argument you’re trying to make is like arguing that we no longer need farms since we can get our food from supermarkets.
Edit: Also the most you can show STATISTICALLY is that the commutative law holds for most (or nearly all) examples of the size you try, whereas mathematical proofs can show that it always holds.
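For contrast, a machine-checked proof covers every case at once. A minimal sketch in Lean 4 (the theorem name is arbitrary; the standard library already provides this result as Nat.add_comm):

```lean
-- Proves commutativity of addition for *all* naturals, not just sampled
-- ones, by induction on the second argument.
theorem add_comm' (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp
  | succ n ih => rw [Nat.add_succ, ih, Nat.succ_add]
```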
The definition of proof is the issue. An instrumentalist requires falsifiable predictions, a realist settles for acceptable logic when no predictions are available.
A rationalist (in the original sense of the word) would go even further requiring a logical proof, and not accepting a mere prediction as a substitute.
How did instrumentalism and realism get identified as conflicting positions? There are forms of physical realism that conflict with instrumentalism—but instrumentalism is not inherently opposed to physical realism.
Not inherently, no. But the distinction is whether the notion of territory is a map (instrumentalism) or the territory (realism). It does not matter most of the time, but sometimes, like when discussing morality or quantum mechanics, it does.
I don’t understand. Can you give an example?
A realist finds it perfectly OK to argue which of the many identical maps is “truer” to the invisible underlying territory. An instrumentalist simply notes that there is no way to resolve this question to everyone’s satisfaction.
I’m objecting to your exclusion of instrumentalism from the realist label. An anti-realist says there is no territory. That’s not necessarily the position of the instrumentalist.
Right. Anti-realism makes an untestable and unprovable statement like this (so does anti-theism, by the way). An instrumentalist says that there is no way to tell if there is one, and that the map/territory distinction is an often useful model, so why not use it when it makes sense.
Well, this is an argument about labels, definitions and identities, which is rarely productive. You can either postulate that there is this territory/reality thing independent of what anyone thinks about it, or you can call it a model which works better in some cases and worse in others. I don’t really care what label you assign to each position.
Respectfully, you were the one invoking technical jargon to do some analytical work.
Without jargon: I think there is physical reality external to human minds. I think that the best science can do is make better predictions—accurately describing reality is harder.
You suggest there is unresolvable tension between those positions.
It’s a useful model, yes.
The assumption that “accurately describing reality” is even possible is a bad model, because you can never tell if you are done. And if it is not possible, then there is no point postulating this reality thing. Might as well avoid it and stick with something that is indisputable: it is possible to build successively better models.
Yes, one of them postulates something that cannot be tested. If you are into Occam’s razor, that’s something that fails it.
We can’t talk about testing propositions against reality until we decide whether there is a reality to test them against. If you are uncertain about that point, the nuances between predicting reality and modelling reality are not on point—and probably confuse the analysis more than they shed any light.
If someone walked into one of your high-end physics lectures and wanted to talk about whether there was reality (see Cartesian doubt), I think you would tell him that the physics class was not the venue for that type of conversation. If you tried to answer his questions while also answering other students’ questions, everything would get hopelessly confused.
I never did. I talk about testing propositions against experiment, without postulating a mysterious untestable reality behind those experiments.
Unlike the model you call reality, the existence of repeatable experiments is a repeatable experimental fact.
What is an experiment but testing a proposition against reality?
That’s the realist’s approach. To me, you test a proposition with an experiment, not against anything.
If the experiment is not a way to tap into reality (in some extremely metaphorical sense), why should I care about the experimental results when trying to decide whether my proposition is true?
If you want to know how far a rock you throw will land (a prediction from a model constructed out of previously performed experiments), you want your model to have the necessary predictive power. Whether it corresponds to some metaphysical concept of reality is quite secondary.
That doesn’t answer my question. To rephrase using your new example, if the prior experiments do not metaphorically “tap into reality,” why should I have any confidence that a model based on those experimental results will be useful in predicting future events?
Well, either the experimental result has predictive power, or it doesn’t. If certain kinds of experimental results prove useful for predicting the future, then I should have confidence in predictions based on (models based on) those results. Whether I call them “reality” or “a model” doesn’t really matter very much.
More generally, to my way of thinking, this whole “instrumentalists don’t believe in reality” business mostly seems like a distinction in how we use words rather than in what experiences we anticipate.
It would potentially make a difference, I suppose, if soi-disant instrumentalists didn’t actually expect the results of different experiments to be reconcilable with one another (under the principle that each experiment was operating on its own model, after all, and there’s no reason to expect those models to have any particular relationship to one another). But for the most part, that doesn’t seem to be the case.
There’s a bit of that when it comes to quirky quantum results, I gather, but to my mind that’s kind of an “instrumentalism of the gaps”… when past researchers have come up with a unified model we accept that unified model, but when current data doesn’t seem unified given our current understanding, rather than seeking a unified model we shrug our shoulders and accept the inconsistency, because hey, they’re just models, it’s not like there’s any real underlying territory.
Which in practice just means we wait for someone else to do the hard work of reconciling it all.
Because it has been experimentally confirmed before, and from experience we can assign a high probability that a model that has worked well in the past will continue to work in similar circumstances in the future.